00:00:00.001 Started by upstream project "autotest-nightly" build number 4279 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3642 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.149 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.150 The recommended git tool is: git 00:00:00.150 using credential 00000000-0000-0000-0000-000000000002 00:00:00.152 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.202 Fetching changes from the remote Git repository 00:00:00.204 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.241 Using shallow fetch with depth 1 00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.241 > git --version # timeout=10 00:00:00.280 > git --version # 'git version 2.39.2' 00:00:00.280 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.302 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.302 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.237 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.249 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.263 Checking out Revision 2fb890043673bc2650cdb1a52838125c51a12f85 (FETCH_HEAD) 00:00:07.263 > git config core.sparsecheckout # timeout=10 00:00:07.275 > git read-tree -mu HEAD # timeout=10 00:00:07.291 > git checkout -f 2fb890043673bc2650cdb1a52838125c51a12f85 # timeout=5 00:00:07.311 Commit message: "jenkins: update TLS certificates" 00:00:07.312 > git rev-list --no-walk 2fb890043673bc2650cdb1a52838125c51a12f85 # timeout=10 00:00:07.410 [Pipeline] Start of Pipeline 00:00:07.425 [Pipeline] library 00:00:07.427 Loading library shm_lib@master 00:00:07.427 Library shm_lib@master is cached. Copying from home. 00:00:07.442 [Pipeline] node 00:00:07.465 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.467 [Pipeline] { 00:00:07.476 [Pipeline] catchError 00:00:07.478 [Pipeline] { 00:00:07.486 [Pipeline] wrap 00:00:07.492 [Pipeline] { 00:00:07.501 [Pipeline] stage 00:00:07.504 [Pipeline] { (Prologue) 00:00:07.521 [Pipeline] echo 00:00:07.522 Node: VM-host-SM38 00:00:07.528 [Pipeline] cleanWs 00:00:07.539 [WS-CLEANUP] Deleting project workspace... 00:00:07.539 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.545 [WS-CLEANUP] done 00:00:07.789 [Pipeline] setCustomBuildProperty 00:00:07.883 [Pipeline] httpRequest 00:00:08.411 [Pipeline] echo 00:00:08.413 Sorcerer 10.211.164.20 is alive 00:00:08.422 [Pipeline] retry 00:00:08.424 [Pipeline] { 00:00:08.435 [Pipeline] httpRequest 00:00:08.441 HttpMethod: GET 00:00:08.442 URL: http://10.211.164.20/packages/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:08.442 Sending request to url: http://10.211.164.20/packages/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:08.444 Response Code: HTTP/1.1 200 OK 00:00:08.444 Success: Status code 200 is in the accepted range: 200,404 00:00:08.445 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:10.475 [Pipeline] } 00:00:10.496 [Pipeline] // retry 00:00:10.503 [Pipeline] sh 00:00:10.790 + tar --no-same-owner -xf jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:10.807 [Pipeline] httpRequest 00:00:11.182 [Pipeline] echo 00:00:11.184 Sorcerer 10.211.164.20 is alive 00:00:11.194 [Pipeline] retry 00:00:11.196 [Pipeline] { 00:00:11.210 [Pipeline] httpRequest 00:00:11.216 HttpMethod: GET 00:00:11.217 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:11.217 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:11.240 Response Code: HTTP/1.1 200 OK 00:00:11.240 Success: Status code 200 is in the accepted range: 200,404 00:00:11.241 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:46.330 [Pipeline] } 00:00:46.348 [Pipeline] // retry 00:00:46.356 [Pipeline] sh 00:00:46.642 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:49.231 [Pipeline] sh 00:00:49.517 + git -C spdk log --oneline -n5 00:00:49.517 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:49.517 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:49.517 4bcab9fb9 correct kick for CQ full case 00:00:49.517 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:49.517 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:49.538 [Pipeline] writeFile 00:00:49.553 [Pipeline] sh 00:00:49.839 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:49.852 [Pipeline] sh 00:00:50.136 + cat autorun-spdk.conf 00:00:50.136 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.136 SPDK_TEST_NVME=1 00:00:50.136 SPDK_TEST_FTL=1 00:00:50.136 SPDK_TEST_ISAL=1 00:00:50.136 SPDK_RUN_ASAN=1 00:00:50.136 SPDK_RUN_UBSAN=1 00:00:50.136 SPDK_TEST_XNVME=1 00:00:50.136 SPDK_TEST_NVME_FDP=1 00:00:50.136 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.145 RUN_NIGHTLY=1 00:00:50.147 [Pipeline] } 00:00:50.160 [Pipeline] // stage 00:00:50.176 [Pipeline] stage 00:00:50.178 [Pipeline] { (Run VM) 00:00:50.191 [Pipeline] sh 00:00:50.477 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:50.477 + echo 'Start stage prepare_nvme.sh' 00:00:50.477 Start stage prepare_nvme.sh 00:00:50.477 + [[ -n 10 ]] 00:00:50.477 + disk_prefix=ex10 00:00:50.477 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:50.477 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:50.477 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:50.477 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.477 ++ SPDK_TEST_NVME=1 00:00:50.477 ++ SPDK_TEST_FTL=1 00:00:50.477 
++ SPDK_TEST_ISAL=1 00:00:50.477 ++ SPDK_RUN_ASAN=1 00:00:50.477 ++ SPDK_RUN_UBSAN=1 00:00:50.477 ++ SPDK_TEST_XNVME=1 00:00:50.477 ++ SPDK_TEST_NVME_FDP=1 00:00:50.477 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.477 ++ RUN_NIGHTLY=1 00:00:50.478 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:50.478 + nvme_files=() 00:00:50.478 + declare -A nvme_files 00:00:50.478 + backend_dir=/var/lib/libvirt/images/backends 00:00:50.478 + nvme_files['nvme.img']=5G 00:00:50.478 + nvme_files['nvme-cmb.img']=5G 00:00:50.478 + nvme_files['nvme-multi0.img']=4G 00:00:50.478 + nvme_files['nvme-multi1.img']=4G 00:00:50.478 + nvme_files['nvme-multi2.img']=4G 00:00:50.478 + nvme_files['nvme-openstack.img']=8G 00:00:50.478 + nvme_files['nvme-zns.img']=5G 00:00:50.478 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:50.478 + (( SPDK_TEST_FTL == 1 )) 00:00:50.478 + nvme_files["nvme-ftl.img"]=6G 00:00:50.478 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:50.478 + nvme_files["nvme-fdp.img"]=1G 00:00:50.478 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:50.478 + for nvme in "${!nvme_files[@]}" 00:00:50.478 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:00:50.738 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.738 + for nvme in "${!nvme_files[@]}" 00:00:50.738 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:00:51.677 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:51.677 + for nvme in "${!nvme_files[@]}" 00:00:51.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:00:51.677 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.677 + for nvme in "${!nvme_files[@]}" 00:00:51.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:00:51.677 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:51.677 + for nvme in "${!nvme_files[@]}" 00:00:51.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:00:51.677 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.677 + for nvme in "${!nvme_files[@]}" 00:00:51.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:00:51.937 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.937 + for nvme in "${!nvme_files[@]}" 00:00:51.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:00:52.510 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.510 + for nvme in "${!nvme_files[@]}" 00:00:52.510 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:00:52.771 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:52.771 + for nvme in "${!nvme_files[@]}" 00:00:52.771 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:00:53.342 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.342 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:00:53.342 + echo 'End stage prepare_nvme.sh' 00:00:53.342 End stage prepare_nvme.sh 00:00:53.355 [Pipeline] sh 00:00:53.639 + DISTRO=fedora39 00:00:53.639 + CPUS=10 00:00:53.639 + RAM=12288 00:00:53.639 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:53.639 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:53.639 00:00:53.639 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:53.639 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:53.639 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:53.639 HELP=0 00:00:53.639 DRY_RUN=0 00:00:53.639 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:00:53.639 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:53.639 NVME_AUTO_CREATE=0 00:00:53.639 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:00:53.639 NVME_CMB=,,,, 00:00:53.639 NVME_PMR=,,,, 00:00:53.639 NVME_ZNS=,,,, 00:00:53.639 NVME_MS=true,,,, 00:00:53.639 NVME_FDP=,,,on, 00:00:53.639 SPDK_VAGRANT_DISTRO=fedora39 00:00:53.639 SPDK_VAGRANT_VMCPU=10 00:00:53.639 SPDK_VAGRANT_VMRAM=12288 00:00:53.639 SPDK_VAGRANT_PROVIDER=libvirt 00:00:53.639 SPDK_VAGRANT_HTTP_PROXY= 00:00:53.639 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:53.639 SPDK_OPENSTACK_NETWORK=0 00:00:53.639 VAGRANT_PACKAGE_BOX=0 00:00:53.639 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:53.639 FORCE_DISTRO=true 00:00:53.639 VAGRANT_BOX_VERSION= 00:00:53.639 EXTRA_VAGRANTFILES= 00:00:53.639 NIC_MODEL=e1000 00:00:53.639 00:00:53.639 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:53.639 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:56.189 Bringing machine 'default' up with 'libvirt' provider... 00:00:56.451 ==> default: Creating image (snapshot of base box volume). 00:00:56.713 ==> default: Creating domain with the following settings... 
00:00:56.713 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731925762_6a83cd9ce881ca0bd377 00:00:56.713 ==> default: -- Domain type: kvm 00:00:56.713 ==> default: -- Cpus: 10 00:00:56.713 ==> default: -- Feature: acpi 00:00:56.713 ==> default: -- Feature: apic 00:00:56.713 ==> default: -- Feature: pae 00:00:56.713 ==> default: -- Memory: 12288M 00:00:56.713 ==> default: -- Memory Backing: hugepages: 00:00:56.713 ==> default: -- Management MAC: 00:00:56.713 ==> default: -- Loader: 00:00:56.713 ==> default: -- Nvram: 00:00:56.713 ==> default: -- Base box: spdk/fedora39 00:00:56.713 ==> default: -- Storage pool: default 00:00:56.713 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731925762_6a83cd9ce881ca0bd377.img (20G) 00:00:56.713 ==> default: -- Volume Cache: default 00:00:56.713 ==> default: -- Kernel: 00:00:56.713 ==> default: -- Initrd: 00:00:56.713 ==> default: -- Graphics Type: vnc 00:00:56.713 ==> default: -- Graphics Port: -1 00:00:56.713 ==> default: -- Graphics IP: 127.0.0.1 00:00:56.713 ==> default: -- Graphics Password: Not defined 00:00:56.713 ==> default: -- Video Type: cirrus 00:00:56.713 ==> default: -- Video VRAM: 9216 00:00:56.713 ==> default: -- Sound Type: 00:00:56.713 ==> default: -- Keymap: en-us 00:00:56.713 ==> default: -- TPM Path: 00:00:56.713 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:56.713 ==> default: -- Command line args: 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:56.713 ==> default: -> value=-drive, 00:00:56.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:56.713 ==> default: -> value=-drive, 00:00:56.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:56.713 ==> default: -> value=-drive, 00:00:56.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.713 ==> default: -> value=-drive, 00:00:56.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.713 ==> default: -> value=-drive, 00:00:56.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:56.713 ==> default: -> value=-drive, 00:00:56.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:56.713 ==> default: -> value=-device, 00:00:56.713 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.974 ==> default: Creating shared folders metadata... 00:00:56.974 ==> default: Starting domain. 00:00:58.362 ==> default: Waiting for domain to get an IP address... 00:01:16.496 ==> default: Waiting for SSH to become available... 00:01:16.496 ==> default: Configuring and enabling network interfaces... 00:01:19.041 default: SSH address: 192.168.121.200:22 00:01:19.041 default: SSH username: vagrant 00:01:19.041 default: SSH auth method: private key 00:01:20.951 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:29.102 ==> default: Mounting SSHFS shared folder... 00:01:31.017 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:31.017 ==> default: Checking Mount.. 00:01:31.959 ==> default: Folder Successfully Mounted! 00:01:32.220 00:01:32.220 SUCCESS! 00:01:32.220 00:01:32.220 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:32.220 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:32.220 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:32.220 00:01:32.229 [Pipeline] } 00:01:32.243 [Pipeline] // stage 00:01:32.252 [Pipeline] dir 00:01:32.253 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:32.254 [Pipeline] { 00:01:32.266 [Pipeline] catchError 00:01:32.268 [Pipeline] { 00:01:32.279 [Pipeline] sh 00:01:32.561 + vagrant ssh-config --host vagrant 00:01:32.561 + sed -ne '/^Host/,$p' 00:01:32.561 + tee ssh_conf 00:01:35.185 Host vagrant 00:01:35.185 HostName 192.168.121.200 00:01:35.185 User vagrant 00:01:35.185 Port 22 00:01:35.185 UserKnownHostsFile /dev/null 00:01:35.185 StrictHostKeyChecking no 00:01:35.185 PasswordAuthentication no 00:01:35.185 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:35.185 IdentitiesOnly yes 00:01:35.185 LogLevel FATAL 00:01:35.185 ForwardAgent yes 00:01:35.185 ForwardX11 yes 00:01:35.185 00:01:35.202 [Pipeline] withEnv 00:01:35.206 [Pipeline] { 00:01:35.220 [Pipeline] sh 00:01:35.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:35.505 source /etc/os-release 00:01:35.505 [[ -e /image.version ]] && img=$(< /image.version) 00:01:35.505 # Minimal, systemd-like check. 
00:01:35.505 if [[ -e /.dockerenv ]]; then 00:01:35.505 # Clear garbage from the node'\''s name: 00:01:35.505 # agt-er_autotest_547-896 -> autotest_547-896 00:01:35.505 # $HOSTNAME is the actual container id 00:01:35.505 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:35.505 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:35.505 # We can assume this is a mount from a host where container is running, 00:01:35.505 # so fetch its hostname to easily identify the target swarm worker. 00:01:35.505 container="$(< /etc/hostname) ($agent)" 00:01:35.505 else 00:01:35.505 # Fallback 00:01:35.505 container=$agent 00:01:35.505 fi 00:01:35.505 fi 00:01:35.505 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:35.505 ' 00:01:35.777 [Pipeline] } 00:01:35.792 [Pipeline] // withEnv 00:01:35.800 [Pipeline] setCustomBuildProperty 00:01:35.813 [Pipeline] stage 00:01:35.815 [Pipeline] { (Tests) 00:01:35.831 [Pipeline] sh 00:01:36.117 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:36.394 [Pipeline] sh 00:01:36.683 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:36.963 [Pipeline] timeout 00:01:36.964 Timeout set to expire in 50 min 00:01:36.966 [Pipeline] { 00:01:36.983 [Pipeline] sh 00:01:37.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:37.835 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:37.851 [Pipeline] sh 00:01:38.136 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:38.413 [Pipeline] sh 00:01:38.694 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:38.975 [Pipeline] sh 00:01:39.260 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:39.521 ++ readlink -f spdk_repo 00:01:39.521 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:39.521 + [[ -n /home/vagrant/spdk_repo ]] 00:01:39.521 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:39.521 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:39.521 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:39.521 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:39.521 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:39.521 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:39.521 + cd /home/vagrant/spdk_repo 00:01:39.521 + source /etc/os-release 00:01:39.521 ++ NAME='Fedora Linux' 00:01:39.521 ++ VERSION='39 (Cloud Edition)' 00:01:39.521 ++ ID=fedora 00:01:39.521 ++ VERSION_ID=39 00:01:39.521 ++ VERSION_CODENAME= 00:01:39.521 ++ PLATFORM_ID=platform:f39 00:01:39.521 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:39.521 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:39.521 ++ LOGO=fedora-logo-icon 00:01:39.521 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:39.521 ++ HOME_URL=https://fedoraproject.org/ 00:01:39.521 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:39.521 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:39.521 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:39.521 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:39.521 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:39.521 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:39.521 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:39.521 ++ SUPPORT_END=2024-11-12 00:01:39.521 ++ VARIANT='Cloud Edition' 00:01:39.521 ++ VARIANT_ID=cloud 00:01:39.521 + uname -a 00:01:39.521 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:39.521 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:39.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:40.046 Hugepages 00:01:40.046 node hugesize free / total 00:01:40.046 node0 1048576kB 0 / 0 00:01:40.046 node0 2048kB 0 / 0 00:01:40.046 00:01:40.046 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:40.046 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:40.046 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:40.046 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:40.046 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:40.307 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:40.307 + rm -f /tmp/spdk-ld-path 00:01:40.307 + source autorun-spdk.conf 00:01:40.307 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.307 ++ SPDK_TEST_NVME=1 00:01:40.307 ++ SPDK_TEST_FTL=1 00:01:40.307 ++ SPDK_TEST_ISAL=1 00:01:40.307 ++ SPDK_RUN_ASAN=1 00:01:40.307 ++ SPDK_RUN_UBSAN=1 00:01:40.307 ++ SPDK_TEST_XNVME=1 00:01:40.307 ++ SPDK_TEST_NVME_FDP=1 00:01:40.307 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.307 ++ RUN_NIGHTLY=1 00:01:40.307 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:40.307 + [[ -n '' ]] 00:01:40.307 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:40.307 + for M in /var/spdk/build-*-manifest.txt 00:01:40.307 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:40.307 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:40.307 + for M in /var/spdk/build-*-manifest.txt 00:01:40.307 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:40.307 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:40.307 + for M in /var/spdk/build-*-manifest.txt 00:01:40.307 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:40.307 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:40.307 ++ uname 00:01:40.307 + [[ Linux == \L\i\n\u\x ]] 00:01:40.307 + sudo dmesg -T 00:01:40.307 + sudo dmesg --clear 00:01:40.307 + dmesg_pid=5028 00:01:40.307 
+ [[ Fedora Linux == FreeBSD ]] 00:01:40.307 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.307 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.307 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:40.307 + sudo dmesg -Tw 00:01:40.307 + [[ -x /usr/src/fio-static/fio ]] 00:01:40.307 + export FIO_BIN=/usr/src/fio-static/fio 00:01:40.307 + FIO_BIN=/usr/src/fio-static/fio 00:01:40.307 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:40.307 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:40.307 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:40.307 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.307 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.307 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:40.307 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.307 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.307 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:40.307 10:30:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:40.307 10:30:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.307 10:30:06 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:40.307 10:30:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:40.307 10:30:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:40.569 10:30:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:40.569 10:30:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:40.569 10:30:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:40.569 10:30:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:40.569 10:30:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.569 10:30:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.569 10:30:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.569 10:30:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.569 10:30:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.569 10:30:06 -- paths/export.sh@5 -- $ export PATH 00:01:40.569 10:30:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.569 10:30:06 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:40.569 10:30:06 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:40.569 10:30:06 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731925806.XXXXXX 00:01:40.569 10:30:06 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731925806.Iyibrg 00:01:40.569 10:30:06 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:40.569 10:30:06 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:40.569 10:30:06 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:40.569 10:30:06 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:40.569 10:30:06 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:40.569 10:30:06 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:40.569 10:30:06 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:40.569 10:30:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.569 10:30:06 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:40.569 10:30:06 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:40.569 10:30:06 -- pm/common@17 -- $ local monitor 00:01:40.569 10:30:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.569 10:30:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.569 10:30:06 -- pm/common@25 -- $ sleep 1 00:01:40.569 10:30:06 -- pm/common@21 -- $ date +%s 00:01:40.569 10:30:06 -- pm/common@21 -- $ date +%s 00:01:40.569 10:30:06 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731925806 00:01:40.569 10:30:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731925806 00:01:40.569 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731925806_collect-vmstat.pm.log 00:01:40.569 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731925806_collect-cpu-load.pm.log 00:01:41.513 10:30:07 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:41.513 10:30:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.513 10:30:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.513 10:30:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:41.513 10:30:07 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.513 Mon Nov 18 10:30:07 AM UTC 2024 00:01:41.513 10:30:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.513 v25.01-pre-189-g83e8405e4 00:01:41.513 10:30:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:41.513 10:30:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:41.513 10:30:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:41.513 10:30:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:41.513 10:30:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.513 ************************************ 00:01:41.513 START TEST asan 00:01:41.513 ************************************ 00:01:41.513 using asan 00:01:41.513 10:30:07 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:41.513 00:01:41.513 real 0m0.000s 00:01:41.513 user 0m0.000s 00:01:41.513 sys 0m0.000s 00:01:41.513 ************************************ 00:01:41.513 END TEST asan 00:01:41.513 ************************************ 00:01:41.513 10:30:07 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:41.513 10:30:07 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.513 10:30:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.513 10:30:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.513 10:30:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:41.513 10:30:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:41.513 10:30:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.513 ************************************ 00:01:41.513 START TEST ubsan 00:01:41.513 ************************************ 00:01:41.513 using ubsan 00:01:41.513 10:30:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:41.513 00:01:41.513 real 0m0.000s 00:01:41.513 user 0m0.000s 00:01:41.513 sys 0m0.000s 00:01:41.513 ************************************ 00:01:41.513 END TEST ubsan 00:01:41.513 ************************************ 00:01:41.513 10:30:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:41.513 10:30:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.513 10:30:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:41.513 10:30:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:41.513 10:30:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:41.513 10:30:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:41.513 10:30:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:41.513 10:30:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:41.513 10:30:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:41.513 10:30:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:41.513 10:30:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:41.775 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:41.775 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:42.037 Using 'verbs' RDMA provider 00:01:55.235 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:05.236 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:05.236 Creating mk/config.mk...done. 00:02:05.236 Creating mk/cc.flags.mk...done. 00:02:05.237 Type 'make' to build. 00:02:05.237 10:30:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:05.237 10:30:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:05.237 10:30:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:05.237 10:30:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.237 ************************************ 00:02:05.237 START TEST make 00:02:05.237 ************************************ 00:02:05.237 10:30:29 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:05.237 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:05.237 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:05.237 meson setup builddir \ 00:02:05.237 -Dwith-libaio=enabled \ 00:02:05.237 -Dwith-liburing=enabled \ 00:02:05.237 -Dwith-libvfn=disabled \ 00:02:05.237 -Dwith-spdk=disabled \ 00:02:05.237 -Dexamples=false \ 00:02:05.237 -Dtests=false \ 00:02:05.237 -Dtools=false && \ 00:02:05.237 meson compile -C builddir && \ 00:02:05.237 cd -) 00:02:05.237 make[1]: Nothing to be done for 'all'. 
00:02:06.623 The Meson build system 00:02:06.623 Version: 1.5.0 00:02:06.623 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:06.623 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:06.623 Build type: native build 00:02:06.623 Project name: xnvme 00:02:06.623 Project version: 0.7.5 00:02:06.623 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.623 C linker for the host machine: cc ld.bfd 2.40-14 00:02:06.623 Host machine cpu family: x86_64 00:02:06.623 Host machine cpu: x86_64 00:02:06.623 Message: host_machine.system: linux 00:02:06.623 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:06.623 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:06.623 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:06.623 Run-time dependency threads found: YES 00:02:06.623 Has header "setupapi.h" : NO 00:02:06.623 Has header "linux/blkzoned.h" : YES 00:02:06.623 Has header "linux/blkzoned.h" : YES (cached) 00:02:06.623 Has header "libaio.h" : YES 00:02:06.623 Library aio found: YES 00:02:06.623 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.623 Run-time dependency liburing found: YES 2.2 00:02:06.623 Dependency libvfn skipped: feature with-libvfn disabled 00:02:06.623 Found CMake: /usr/bin/cmake (3.27.7) 00:02:06.623 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:06.623 Subproject spdk : skipped: feature with-spdk disabled 00:02:06.623 Run-time dependency appleframeworks found: NO (tried framework) 00:02:06.623 Run-time dependency appleframeworks found: NO (tried framework) 00:02:06.623 Library rt found: YES 00:02:06.623 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:06.623 Configuring xnvme_config.h using configuration 00:02:06.623 Configuring xnvme.spec using configuration 00:02:06.623 Run-time dependency bash-completion found: YES 2.11 00:02:06.623 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:06.623 Program cp found: YES (/usr/bin/cp) 00:02:06.623 Build targets in project: 3 00:02:06.623 00:02:06.623 xnvme 0.7.5 00:02:06.623 00:02:06.623 Subprojects 00:02:06.623 spdk : NO Feature 'with-spdk' disabled 00:02:06.623 00:02:06.623 User defined options 00:02:06.623 examples : false 00:02:06.623 tests : false 00:02:06.623 tools : false 00:02:06.623 with-libaio : enabled 00:02:06.623 with-liburing: enabled 00:02:06.623 with-libvfn : disabled 00:02:06.623 with-spdk : disabled 00:02:06.623 00:02:06.623 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.882 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:06.882 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:06.882 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:06.882 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:06.882 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:06.882 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:06.882 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:06.882 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:06.882 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:06.882 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:07.143 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:02:07.143 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:07.143 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:07.143 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:07.143 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:07.143 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:07.143 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:07.143 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:07.143 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:07.143 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:07.143 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:07.143 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:07.143 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:07.143 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:07.144 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:07.144 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:07.144 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:07.144 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:07.144 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:07.144 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:07.144 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:07.144 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:07.144 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:07.144 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:07.144 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:07.144 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:07.144 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:07.144 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:07.144 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:07.144 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:07.144 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:07.144 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:07.406 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:07.406 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:07.406 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:07.406 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:07.406 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:07.406 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:07.406 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:07.406 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:07.406 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 
00:02:07.406 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:07.406 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:07.406 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:07.406 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:07.406 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:07.406 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:07.406 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:07.406 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:07.406 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:07.406 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:07.406 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:07.406 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:07.406 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:07.406 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:07.406 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:07.406 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:07.406 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:07.668 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:07.668 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:07.668 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:07.668 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:07.668 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:07.668 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:07.929 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:07.929 [75/76] Linking static target lib/libxnvme.a 00:02:07.929 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:07.929 INFO: autodetecting backend as ninja 00:02:07.929 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:07.929 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:14.520 The Meson build system 00:02:14.520 Version: 1.5.0 00:02:14.520 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:14.520 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:14.520 Build type: native build 00:02:14.520 Program cat found: YES (/usr/bin/cat) 00:02:14.520 Project name: DPDK 00:02:14.520 Project version: 24.03.0 00:02:14.520 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.520 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.520 Host machine cpu family: x86_64 00:02:14.520 Host machine cpu: x86_64 00:02:14.520 Message: ## Building in Developer Mode ## 00:02:14.520 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.520 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.520 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.520 Program python3 found: YES (/usr/bin/python3) 00:02:14.520 Program cat found: YES (/usr/bin/cat) 00:02:14.520 Compiler for C supports arguments -march=native: YES 00:02:14.520 Checking for size of "void *" : 8 00:02:14.520 Checking for size of "void *" : 8 (cached) 00:02:14.520 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:02:14.520 Library m found: YES 00:02:14.520 Library numa found: YES 00:02:14.520 Has header "numaif.h" : YES 00:02:14.520 Library fdt found: NO 00:02:14.520 Library execinfo found: NO 00:02:14.520 Has header "execinfo.h" : YES 00:02:14.520 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.520 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.520 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.520 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.520 Run-time dependency openssl found: YES 3.1.1 00:02:14.520 Run-time dependency libpcap found: YES 1.10.4 00:02:14.520 Has header "pcap.h" with dependency libpcap: YES 00:02:14.520 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.520 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.520 Compiler for C supports arguments -Wformat: YES 00:02:14.520 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.520 Compiler for C supports arguments -Wformat-security: NO 00:02:14.520 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.520 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.520 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.520 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.520 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.520 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.520 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.520 Compiler for C supports arguments -Wundef: YES 00:02:14.520 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.520 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.520 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.520 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.520 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.520 Program objdump found: YES (/usr/bin/objdump) 00:02:14.520 Compiler for C supports arguments -mavx512f: YES 00:02:14.520 Checking if "AVX512 checking" compiles: YES 00:02:14.520 Fetching value of define "__SSE4_2__" : 1 00:02:14.520 Fetching value of define "__AES__" : 1 00:02:14.520 Fetching value of define "__AVX__" : 1 00:02:14.520 Fetching value of define "__AVX2__" : 1 00:02:14.520 Fetching value of define "__AVX512BW__" : 1 00:02:14.520 Fetching value of define "__AVX512CD__" : 1 00:02:14.520 Fetching value of define "__AVX512DQ__" : 1 00:02:14.520 Fetching value of define "__AVX512F__" : 1 00:02:14.520 Fetching value of define "__AVX512VL__" : 1 00:02:14.520 Fetching value of define "__PCLMUL__" : 1 00:02:14.520 Fetching value of define "__RDRND__" : 1 00:02:14.520 Fetching value of define "__RDSEED__" : 1 00:02:14.520 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:14.520 Fetching value of define "__znver1__" : (undefined) 00:02:14.520 Fetching value of define "__znver2__" : (undefined) 00:02:14.520 Fetching value of define "__znver3__" : (undefined) 00:02:14.520 Fetching value of define "__znver4__" : (undefined) 00:02:14.520 Library asan found: YES 00:02:14.520 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.520 Message: lib/log: Defining dependency "log" 00:02:14.520 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.520 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.520 Library rt found: YES 00:02:14.520 Checking for function "getentropy" : NO 
00:02:14.520 Message: lib/eal: Defining dependency "eal" 00:02:14.520 Message: lib/ring: Defining dependency "ring" 00:02:14.520 Message: lib/rcu: Defining dependency "rcu" 00:02:14.520 Message: lib/mempool: Defining dependency "mempool" 00:02:14.520 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.520 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.520 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.520 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.520 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.520 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.520 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:14.520 Compiler for C supports arguments -mpclmul: YES 00:02:14.520 Compiler for C supports arguments -maes: YES 00:02:14.520 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.520 Compiler for C supports arguments -mavx512bw: YES 00:02:14.520 Compiler for C supports arguments -mavx512dq: YES 00:02:14.520 Compiler for C supports arguments -mavx512vl: YES 00:02:14.520 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.520 Compiler for C supports arguments -mavx2: YES 00:02:14.520 Compiler for C supports arguments -mavx: YES 00:02:14.520 Message: lib/net: Defining dependency "net" 00:02:14.520 Message: lib/meter: Defining dependency "meter" 00:02:14.520 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.520 Message: lib/pci: Defining dependency "pci" 00:02:14.520 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.520 Message: lib/hash: Defining dependency "hash" 00:02:14.520 Message: lib/timer: Defining dependency "timer" 00:02:14.520 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.520 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.520 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.520 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.520 Message: lib/power: Defining dependency "power" 00:02:14.520 Message: lib/reorder: Defining dependency "reorder" 00:02:14.520 Message: lib/security: Defining dependency "security" 00:02:14.520 Has header "linux/userfaultfd.h" : YES 00:02:14.520 Has header "linux/vduse.h" : YES 00:02:14.520 Message: lib/vhost: Defining dependency "vhost" 00:02:14.520 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.520 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.520 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.520 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.520 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.520 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.520 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.520 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.520 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.520 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.520 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.520 Configuring doxy-api-html.conf using configuration 00:02:14.520 Configuring doxy-api-man.conf using configuration 00:02:14.520 Program mandb found: YES (/usr/bin/mandb) 00:02:14.520 Program sphinx-build found: NO 00:02:14.520 Configuring rte_build_config.h using configuration 00:02:14.520 Message: 00:02:14.520 ================= 00:02:14.520 Applications 
Enabled 00:02:14.520 ================= 00:02:14.520 00:02:14.520 apps: 00:02:14.520 00:02:14.520 00:02:14.520 Message: 00:02:14.520 ================= 00:02:14.520 Libraries Enabled 00:02:14.520 ================= 00:02:14.520 00:02:14.520 libs: 00:02:14.521 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.521 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.521 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.521 00:02:14.521 Message: 00:02:14.521 =============== 00:02:14.521 Drivers Enabled 00:02:14.521 =============== 00:02:14.521 00:02:14.521 common: 00:02:14.521 00:02:14.521 bus: 00:02:14.521 pci, vdev, 00:02:14.521 mempool: 00:02:14.521 ring, 00:02:14.521 dma: 00:02:14.521 00:02:14.521 net: 00:02:14.521 00:02:14.521 crypto: 00:02:14.521 00:02:14.521 compress: 00:02:14.521 00:02:14.521 vdpa: 00:02:14.521 00:02:14.521 00:02:14.521 Message: 00:02:14.521 ================= 00:02:14.521 Content Skipped 00:02:14.521 ================= 00:02:14.521 00:02:14.521 apps: 00:02:14.521 dumpcap: explicitly disabled via build config 00:02:14.521 graph: explicitly disabled via build config 00:02:14.521 pdump: explicitly disabled via build config 00:02:14.521 proc-info: explicitly disabled via build config 00:02:14.521 test-acl: explicitly disabled via build config 00:02:14.521 test-bbdev: explicitly disabled via build config 00:02:14.521 test-cmdline: explicitly disabled via build config 00:02:14.521 test-compress-perf: explicitly disabled via build config 00:02:14.521 test-crypto-perf: explicitly disabled via build config 00:02:14.521 test-dma-perf: explicitly disabled via build config 00:02:14.521 test-eventdev: explicitly disabled via build config 00:02:14.521 test-fib: explicitly disabled via build config 00:02:14.521 test-flow-perf: explicitly disabled via build config 00:02:14.521 test-gpudev: explicitly disabled via build config 00:02:14.521 test-mldev: explicitly disabled via build config 00:02:14.521 test-pipeline: explicitly disabled via build config 00:02:14.521 test-pmd: explicitly disabled via build config 00:02:14.521 test-regex: explicitly disabled via build config 00:02:14.521 test-sad: explicitly disabled via build config 00:02:14.521 test-security-perf: explicitly disabled via build config 00:02:14.521 00:02:14.521 libs: 00:02:14.521 argparse: explicitly disabled via build config 00:02:14.521 metrics: explicitly disabled via build config 00:02:14.521 acl: explicitly disabled via build config 00:02:14.521 bbdev: explicitly disabled via build config 00:02:14.521 bitratestats: explicitly disabled via build config 00:02:14.521 bpf: explicitly disabled via build config 00:02:14.521 cfgfile: explicitly disabled via build config 00:02:14.521 distributor: explicitly disabled via build config 00:02:14.521 efd: explicitly disabled via build config 00:02:14.521 eventdev: explicitly disabled via build config 00:02:14.521 dispatcher: explicitly disabled via build config 00:02:14.521 gpudev: explicitly disabled via build config 00:02:14.521 gro: explicitly disabled via build config 00:02:14.521 gso: explicitly disabled via build config 00:02:14.521 ip_frag: explicitly disabled via build config 00:02:14.521 jobstats: explicitly disabled via build config 00:02:14.521 latencystats: explicitly disabled via build config 00:02:14.521 lpm: explicitly disabled via build config 00:02:14.521 member: explicitly disabled via build config 00:02:14.521 pcapng: explicitly disabled via build config 00:02:14.521 rawdev: explicitly disabled via build config 00:02:14.521 
regexdev: explicitly disabled via build config 00:02:14.521 mldev: explicitly disabled via build config 00:02:14.521 rib: explicitly disabled via build config 00:02:14.521 sched: explicitly disabled via build config 00:02:14.521 stack: explicitly disabled via build config 00:02:14.521 ipsec: explicitly disabled via build config 00:02:14.521 pdcp: explicitly disabled via build config 00:02:14.521 fib: explicitly disabled via build config 00:02:14.521 port: explicitly disabled via build config 00:02:14.521 pdump: explicitly disabled via build config 00:02:14.521 table: explicitly disabled via build config 00:02:14.521 pipeline: explicitly disabled via build config 00:02:14.521 graph: explicitly disabled via build config 00:02:14.521 node: explicitly disabled via build config 00:02:14.521 00:02:14.521 drivers: 00:02:14.521 common/cpt: not in enabled drivers build config 00:02:14.521 common/dpaax: not in enabled drivers build config 00:02:14.521 common/iavf: not in enabled drivers build config 00:02:14.521 common/idpf: not in enabled drivers build config 00:02:14.521 common/ionic: not in enabled drivers build config 00:02:14.521 common/mvep: not in enabled drivers build config 00:02:14.521 common/octeontx: not in enabled drivers build config 00:02:14.521 bus/auxiliary: not in enabled drivers build config 00:02:14.521 bus/cdx: not in enabled drivers build config 00:02:14.521 bus/dpaa: not in enabled drivers build config 00:02:14.521 bus/fslmc: not in enabled drivers build config 00:02:14.521 bus/ifpga: not in enabled drivers build config 00:02:14.521 bus/platform: not in enabled drivers build config 00:02:14.521 bus/uacce: not in enabled drivers build config 00:02:14.521 bus/vmbus: not in enabled drivers build config 00:02:14.521 common/cnxk: not in enabled drivers build config 00:02:14.521 common/mlx5: not in enabled drivers build config 00:02:14.521 common/nfp: not in enabled drivers build config 00:02:14.521 common/nitrox: not in enabled drivers build config 00:02:14.521 common/qat: not in enabled drivers build config 00:02:14.521 common/sfc_efx: not in enabled drivers build config 00:02:14.521 mempool/bucket: not in enabled drivers build config 00:02:14.521 mempool/cnxk: not in enabled drivers build config 00:02:14.521 mempool/dpaa: not in enabled drivers build config 00:02:14.521 mempool/dpaa2: not in enabled drivers build config 00:02:14.521 mempool/octeontx: not in enabled drivers build config 00:02:14.521 mempool/stack: not in enabled drivers build config 00:02:14.521 dma/cnxk: not in enabled drivers build config 00:02:14.521 dma/dpaa: not in enabled drivers build config 00:02:14.521 dma/dpaa2: not in enabled drivers build config 00:02:14.521 dma/hisilicon: not in enabled drivers build config 00:02:14.521 dma/idxd: not in enabled drivers build config 00:02:14.521 dma/ioat: not in enabled drivers build config 00:02:14.521 dma/skeleton: not in enabled drivers build config 00:02:14.521 net/af_packet: not in enabled drivers build config 00:02:14.521 net/af_xdp: not in enabled drivers build config 00:02:14.521 net/ark: not in enabled drivers build config 00:02:14.521 net/atlantic: not in enabled drivers build config 00:02:14.521 net/avp: not in enabled drivers build config 00:02:14.521 net/axgbe: not in enabled drivers build config 00:02:14.521 net/bnx2x: not in enabled drivers build config 00:02:14.521 net/bnxt: not in enabled drivers build config 00:02:14.521 net/bonding: not in enabled drivers build config 00:02:14.521 net/cnxk: not in enabled drivers build config 00:02:14.521 net/cpfl: 
not in enabled drivers build config 00:02:14.521 net/cxgbe: not in enabled drivers build config 00:02:14.521 net/dpaa: not in enabled drivers build config 00:02:14.521 net/dpaa2: not in enabled drivers build config 00:02:14.521 net/e1000: not in enabled drivers build config 00:02:14.521 net/ena: not in enabled drivers build config 00:02:14.521 net/enetc: not in enabled drivers build config 00:02:14.521 net/enetfec: not in enabled drivers build config 00:02:14.521 net/enic: not in enabled drivers build config 00:02:14.521 net/failsafe: not in enabled drivers build config 00:02:14.521 net/fm10k: not in enabled drivers build config 00:02:14.521 net/gve: not in enabled drivers build config 00:02:14.521 net/hinic: not in enabled drivers build config 00:02:14.521 net/hns3: not in enabled drivers build config 00:02:14.521 net/i40e: not in enabled drivers build config 00:02:14.521 net/iavf: not in enabled drivers build config 00:02:14.521 net/ice: not in enabled drivers build config 00:02:14.521 net/idpf: not in enabled drivers build config 00:02:14.521 net/igc: not in enabled drivers build config 00:02:14.521 net/ionic: not in enabled drivers build config 00:02:14.521 net/ipn3ke: not in enabled drivers build config 00:02:14.521 net/ixgbe: not in enabled drivers build config 00:02:14.521 net/mana: not in enabled drivers build config 00:02:14.521 net/memif: not in enabled drivers build config 00:02:14.521 net/mlx4: not in enabled drivers build config 00:02:14.521 net/mlx5: not in enabled drivers build config 00:02:14.521 net/mvneta: not in enabled drivers build config 00:02:14.521 net/mvpp2: not in enabled drivers build config 00:02:14.521 net/netvsc: not in enabled drivers build config 00:02:14.521 net/nfb: not in enabled drivers build config 00:02:14.521 net/nfp: not in enabled drivers build config 00:02:14.521 net/ngbe: not in enabled drivers build config 00:02:14.521 net/null: not in enabled drivers build config 00:02:14.521 net/octeontx: not in enabled drivers build config 00:02:14.521 net/octeon_ep: not in enabled drivers build config 00:02:14.521 net/pcap: not in enabled drivers build config 00:02:14.521 net/pfe: not in enabled drivers build config 00:02:14.521 net/qede: not in enabled drivers build config 00:02:14.521 net/ring: not in enabled drivers build config 00:02:14.521 net/sfc: not in enabled drivers build config 00:02:14.521 net/softnic: not in enabled drivers build config 00:02:14.521 net/tap: not in enabled drivers build config 00:02:14.521 net/thunderx: not in enabled drivers build config 00:02:14.521 net/txgbe: not in enabled drivers build config 00:02:14.521 net/vdev_netvsc: not in enabled drivers build config 00:02:14.521 net/vhost: not in enabled drivers build config 00:02:14.521 net/virtio: not in enabled drivers build config 00:02:14.521 net/vmxnet3: not in enabled drivers build config 00:02:14.521 raw/*: missing internal dependency, "rawdev" 00:02:14.521 crypto/armv8: not in enabled drivers build config 00:02:14.521 crypto/bcmfs: not in enabled drivers build config 00:02:14.521 crypto/caam_jr: not in enabled drivers build config 00:02:14.521 crypto/ccp: not in enabled drivers build config 00:02:14.521 crypto/cnxk: not in enabled drivers build config 00:02:14.521 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.521 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.521 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.521 crypto/mlx5: not in enabled drivers build config 00:02:14.521 crypto/mvsam: not in enabled drivers build config 
00:02:14.521 crypto/nitrox: not in enabled drivers build config 00:02:14.522 crypto/null: not in enabled drivers build config 00:02:14.522 crypto/octeontx: not in enabled drivers build config 00:02:14.522 crypto/openssl: not in enabled drivers build config 00:02:14.522 crypto/scheduler: not in enabled drivers build config 00:02:14.522 crypto/uadk: not in enabled drivers build config 00:02:14.522 crypto/virtio: not in enabled drivers build config 00:02:14.522 compress/isal: not in enabled drivers build config 00:02:14.522 compress/mlx5: not in enabled drivers build config 00:02:14.522 compress/nitrox: not in enabled drivers build config 00:02:14.522 compress/octeontx: not in enabled drivers build config 00:02:14.522 compress/zlib: not in enabled drivers build config 00:02:14.522 regex/*: missing internal dependency, "regexdev" 00:02:14.522 ml/*: missing internal dependency, "mldev" 00:02:14.522 vdpa/ifc: not in enabled drivers build config 00:02:14.522 vdpa/mlx5: not in enabled drivers build config 00:02:14.522 vdpa/nfp: not in enabled drivers build config 00:02:14.522 vdpa/sfc: not in enabled drivers build config 00:02:14.522 event/*: missing internal dependency, "eventdev" 00:02:14.522 baseband/*: missing internal dependency, "bbdev" 00:02:14.522 gpu/*: missing internal dependency, "gpudev" 00:02:14.522 00:02:14.522 00:02:14.522 Build targets in project: 84 00:02:14.522 00:02:14.522 DPDK 24.03.0 00:02:14.522 00:02:14.522 User defined options 00:02:14.522 buildtype : debug 00:02:14.522 default_library : shared 00:02:14.522 libdir : lib 00:02:14.522 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.522 b_sanitize : address 00:02:14.522 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:14.522 c_link_args : 00:02:14.522 cpu_instruction_set: native 00:02:14.522 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:14.522 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:14.522 enable_docs : false 00:02:14.522 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:14.522 enable_kmods : false 00:02:14.522 max_lcores : 128 00:02:14.522 tests : false 00:02:14.522 00:02:14.522 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.522 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:14.522 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.522 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.522 [3/267] Linking static target lib/librte_kvargs.a 00:02:14.522 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.522 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.522 [6/267] Linking static target lib/librte_log.a 00:02:14.522 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.522 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:14.522 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.522 [10/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.783 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.783 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.783 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.783 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.783 [15/267] Linking static target lib/librte_telemetry.a 00:02:14.783 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.783 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.043 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.043 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.043 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.043 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.043 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.043 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.304 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.305 [25/267] Linking target lib/librte_log.so.24.1 00:02:15.305 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.305 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.305 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.305 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:15.305 [30/267] Linking target lib/librte_kvargs.so.24.1 00:02:15.566 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.566 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.566 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.566 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.566 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.566 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.566 [37/267] Linking target lib/librte_telemetry.so.24.1 00:02:15.566 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.566 [39/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:15.566 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.825 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.825 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.825 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.825 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.825 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:15.825 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.825 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.825 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.084 
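The options summarized above pin down a debug, shared-library DPDK 24.03 build with AddressSanitizer (b_sanitize : address) and only the core libraries, the pci/vdev buses, and the ring mempool driver enabled. For orientation, a minimal sketch of a program consuming two of the enabled libraries, eal and ring, might look like the following (illustrative only, not part of this build; the EAL arguments, ring name, and ring size are arbitrary assumptions):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() consumes EAL arguments, e.g. "-l 0 --no-huge". */
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}

	/* Single-producer/single-consumer ring holding 1024 pointers. */
	struct rte_ring *r = rte_ring_create("demo_ring", 1024,
					     rte_socket_id(),
					     RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return 1;

	void *obj = (void *)0x1;	/* placeholder payload */
	rte_ring_enqueue(r, obj);
	rte_ring_dequeue(r, &obj);

	rte_ring_free(r);
	rte_eal_cleanup();
	return 0;
}

Against a build configured like the one above, such a program would typically be compiled with the flags reported by pkg-config for libdpdk, pointed at the prefix shown in the options (/home/vagrant/spdk_repo/spdk/dpdk/build).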
[49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.084 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.084 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.084 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.084 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.084 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.342 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.342 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.342 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.342 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.342 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.342 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.342 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.342 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.601 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.601 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.601 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.601 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.601 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.859 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.859 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.859 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.859 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.859 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.859 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.860 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.860 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.860 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.118 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.118 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.118 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.118 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.118 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.376 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.376 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.376 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.634 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.634 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.634 [87/267] Linking static target lib/librte_ring.a 00:02:17.634 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.634 [89/267] Linking static target lib/librte_rcu.a 00:02:17.634 [90/267] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.634 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.634 [92/267] Linking static target lib/librte_eal.a 00:02:17.634 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.634 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.892 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.892 [96/267] Linking static target lib/librte_mempool.a 00:02:17.892 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.892 [98/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.892 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.892 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.149 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.150 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.150 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.150 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.408 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.408 [106/267] Linking static target lib/librte_meter.a 00:02:18.408 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:18.408 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.408 [109/267] Linking static target lib/librte_net.a 00:02:18.408 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.408 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.665 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.665 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.665 [114/267] Linking static target lib/librte_mbuf.a 00:02:18.666 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.666 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.666 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.924 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.924 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.924 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.184 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.184 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.506 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.506 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.506 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.506 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.506 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.506 [128/267] Linking static target lib/librte_pci.a 00:02:19.506 [129/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.506 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.506 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.506 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.506 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.506 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.765 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.765 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.765 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.765 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.765 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.765 [140/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.765 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.765 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:19.765 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.765 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.023 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.023 [146/267] Linking static target lib/librte_cmdline.a 00:02:20.023 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.023 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.023 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.023 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.282 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.282 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.282 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.282 [154/267] Linking static target lib/librte_timer.a 00:02:20.282 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.282 [156/267] Linking static target lib/librte_ethdev.a 00:02:20.540 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.540 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.540 [159/267] Linking static target lib/librte_compressdev.a 00:02:20.540 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.540 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.540 [162/267] Linking static target lib/librte_hash.a 00:02:20.798 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.798 [164/267] Linking static target lib/librte_dmadev.a 00:02:20.798 [165/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.798 [166/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.798 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.798 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.056 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.056 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.056 [171/267] Compiling C 
object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.056 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.056 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.056 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.315 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.315 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.315 [177/267] Linking static target lib/librte_cryptodev.a 00:02:21.315 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.315 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.315 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.315 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.574 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.574 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.574 [184/267] Linking static target lib/librte_power.a 00:02:21.574 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.832 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.832 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.832 [188/267] Linking static target lib/librte_reorder.a 00:02:21.832 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.832 [190/267] Linking static target lib/librte_security.a 00:02:21.832 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.832 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.090 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.348 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.348 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.348 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.605 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.605 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.605 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.605 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.863 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.863 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.863 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.863 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.120 [205/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.120 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.120 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.120 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.120 [209/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.120 
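The mbuf and mempool libraries compiled in the steps above are normally used together: a pktmbuf pool is allocated through the mempool library, backed by the ring mempool driver that this build enables. A hedged sketch (not from the log; the pool name and geometry below are arbitrary assumptions):

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* 4095 mbufs with the default buffer size; per-lcore cache of 250. */
	struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool", 4095,
			250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL)
		return 1;

	struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
	if (m != NULL)
		rte_pktmbuf_free(m);	/* returns the buffer to the pool */

	rte_mempool_free(mp);
	rte_eal_cleanup();
	return 0;
}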
[210/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.378 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.378 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.378 [213/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.378 [214/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.378 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.378 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.378 [217/267] Linking static target drivers/librte_bus_vdev.a 00:02:23.378 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.378 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.378 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:23.378 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.378 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.378 [223/267] Linking static target drivers/librte_mempool_ring.a 00:02:23.378 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.635 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.635 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.892 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.830 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.090 [229/267] Linking target lib/librte_eal.so.24.1 00:02:25.090 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.090 [231/267] Linking target lib/librte_ring.so.24.1 00:02:25.090 [232/267] Linking target lib/librte_pci.so.24.1 00:02:25.090 [233/267] Linking target lib/librte_meter.so.24.1 00:02:25.090 [234/267] Linking target lib/librte_timer.so.24.1 00:02:25.090 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:25.090 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.090 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.349 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.349 [239/267] Linking target lib/librte_rcu.so.24.1 00:02:25.349 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.349 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.349 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.349 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:25.349 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.349 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.349 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.349 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:25.349 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.609 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.609 [250/267] 
Linking target lib/librte_net.so.24.1 00:02:25.609 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:25.609 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:25.609 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:25.609 [254/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.609 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.609 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.609 [257/267] Linking target lib/librte_hash.so.24.1 00:02:25.609 [258/267] Linking target lib/librte_cmdline.so.24.1 00:02:25.609 [259/267] Linking target lib/librte_ethdev.so.24.1 00:02:25.609 [260/267] Linking target lib/librte_security.so.24.1 00:02:25.868 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.868 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.868 [263/267] Linking target lib/librte_power.so.24.1 00:02:26.434 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.434 [265/267] Linking static target lib/librte_vhost.a 00:02:27.808 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.808 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:27.808 INFO: autodetecting backend as ninja 00:02:27.808 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:42.690 CC lib/log/log_flags.o 00:02:42.690 CC lib/log/log.o 00:02:42.690 CC lib/log/log_deprecated.o 00:02:42.690 CC lib/ut/ut.o 00:02:42.690 CC lib/ut_mock/mock.o 00:02:42.690 LIB libspdk_ut.a 00:02:42.690 LIB libspdk_log.a 00:02:42.690 LIB libspdk_ut_mock.a 00:02:42.690 SO libspdk_ut.so.2.0 00:02:42.690 SO libspdk_ut_mock.so.6.0 00:02:42.690 SO libspdk_log.so.7.1 00:02:42.690 SYMLINK libspdk_ut.so 00:02:42.690 SYMLINK libspdk_ut_mock.so 00:02:42.690 SYMLINK libspdk_log.so 00:02:42.690 CC lib/util/base64.o 00:02:42.690 CC lib/util/bit_array.o 00:02:42.690 CC lib/util/crc16.o 00:02:42.690 CC lib/util/cpuset.o 00:02:42.690 CC lib/util/crc32.o 00:02:42.690 CC lib/util/crc32c.o 00:02:42.690 CC lib/ioat/ioat.o 00:02:42.690 CC lib/dma/dma.o 00:02:42.690 CXX lib/trace_parser/trace.o 00:02:42.690 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.690 CC lib/util/crc32_ieee.o 00:02:42.690 CC lib/util/crc64.o 00:02:42.690 CC lib/util/dif.o 00:02:42.690 LIB libspdk_dma.a 00:02:42.690 CC lib/util/fd.o 00:02:42.690 SO libspdk_dma.so.5.0 00:02:42.690 CC lib/vfio_user/host/vfio_user.o 00:02:42.690 CC lib/util/fd_group.o 00:02:42.690 CC lib/util/file.o 00:02:42.690 SYMLINK libspdk_dma.so 00:02:42.690 CC lib/util/hexlify.o 00:02:42.690 CC lib/util/iov.o 00:02:42.690 LIB libspdk_ioat.a 00:02:42.690 CC lib/util/math.o 00:02:42.690 SO libspdk_ioat.so.7.0 00:02:42.690 CC lib/util/net.o 00:02:42.690 CC lib/util/pipe.o 00:02:42.690 SYMLINK libspdk_ioat.so 00:02:42.690 CC lib/util/strerror_tls.o 00:02:42.690 CC lib/util/string.o 00:02:42.690 CC lib/util/uuid.o 00:02:42.690 LIB libspdk_vfio_user.a 00:02:42.690 CC lib/util/xor.o 00:02:42.690 SO libspdk_vfio_user.so.5.0 00:02:42.690 CC lib/util/zipf.o 00:02:42.690 CC lib/util/md5.o 00:02:42.690 SYMLINK libspdk_vfio_user.so 00:02:42.690 LIB libspdk_util.a 00:02:42.690 SO libspdk_util.so.10.1 00:02:42.690 LIB libspdk_trace_parser.a 00:02:42.690 SYMLINK libspdk_util.so 00:02:42.690 SO libspdk_trace_parser.so.6.0 
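At this point the output has switched from the embedded DPDK build to SPDK proper, starting with lib/log, lib/ut_mock, and lib/util. A small hedged sketch of the logging library linked above (spdk_log_set_print_level, SPDK_NOTICELOG, and SPDK_ERRLOG come from spdk/log.h; the message strings are arbitrary):

#include "spdk/log.h"

int main(void)
{
	/* Print everything up to and including debug-level messages. */
	spdk_log_set_print_level(SPDK_LOG_DEBUG);

	SPDK_NOTICELOG("libspdk_log linked and working\n");
	SPDK_ERRLOG("example error-path message\n");
	return 0;
}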
00:02:42.690 SYMLINK libspdk_trace_parser.so 00:02:42.690 CC lib/vmd/vmd.o 00:02:42.690 CC lib/vmd/led.o 00:02:42.690 CC lib/idxd/idxd.o 00:02:42.690 CC lib/idxd/idxd_user.o 00:02:42.690 CC lib/idxd/idxd_kernel.o 00:02:42.690 CC lib/rdma_utils/rdma_utils.o 00:02:42.690 CC lib/json/json_parse.o 00:02:42.690 CC lib/json/json_util.o 00:02:42.690 CC lib/conf/conf.o 00:02:42.690 CC lib/env_dpdk/env.o 00:02:42.690 CC lib/json/json_write.o 00:02:42.690 CC lib/env_dpdk/memory.o 00:02:42.690 LIB libspdk_conf.a 00:02:42.690 CC lib/env_dpdk/pci.o 00:02:42.690 SO libspdk_conf.so.6.0 00:02:42.690 CC lib/env_dpdk/init.o 00:02:42.690 CC lib/env_dpdk/threads.o 00:02:42.690 LIB libspdk_rdma_utils.a 00:02:42.690 SYMLINK libspdk_conf.so 00:02:42.690 SO libspdk_rdma_utils.so.1.0 00:02:42.690 CC lib/env_dpdk/pci_ioat.o 00:02:42.690 SYMLINK libspdk_rdma_utils.so 00:02:42.690 CC lib/env_dpdk/pci_virtio.o 00:02:42.690 CC lib/env_dpdk/pci_vmd.o 00:02:42.690 CC lib/env_dpdk/pci_idxd.o 00:02:42.690 LIB libspdk_json.a 00:02:42.690 SO libspdk_json.so.6.0 00:02:42.690 SYMLINK libspdk_json.so 00:02:42.690 CC lib/env_dpdk/pci_event.o 00:02:42.690 CC lib/env_dpdk/sigbus_handler.o 00:02:42.690 CC lib/env_dpdk/pci_dpdk.o 00:02:42.690 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:42.690 CC lib/rdma_provider/common.o 00:02:42.690 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:42.690 LIB libspdk_idxd.a 00:02:42.690 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:42.690 SO libspdk_idxd.so.12.1 00:02:42.690 LIB libspdk_vmd.a 00:02:42.690 SO libspdk_vmd.so.6.0 00:02:42.690 SYMLINK libspdk_idxd.so 00:02:42.690 SYMLINK libspdk_vmd.so 00:02:42.949 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:42.949 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:42.949 CC lib/jsonrpc/jsonrpc_server.o 00:02:42.949 CC lib/jsonrpc/jsonrpc_client.o 00:02:42.949 LIB libspdk_rdma_provider.a 00:02:42.949 SO libspdk_rdma_provider.so.7.0 00:02:42.949 SYMLINK libspdk_rdma_provider.so 00:02:43.207 LIB libspdk_jsonrpc.a 00:02:43.207 SO libspdk_jsonrpc.so.6.0 00:02:43.207 SYMLINK libspdk_jsonrpc.so 00:02:43.476 CC lib/rpc/rpc.o 00:02:43.476 LIB libspdk_env_dpdk.a 00:02:43.476 SO libspdk_env_dpdk.so.15.1 00:02:43.746 LIB libspdk_rpc.a 00:02:43.746 SO libspdk_rpc.so.6.0 00:02:43.746 SYMLINK libspdk_env_dpdk.so 00:02:43.746 SYMLINK libspdk_rpc.so 00:02:43.746 CC lib/trace/trace.o 00:02:43.746 CC lib/trace/trace_flags.o 00:02:43.746 CC lib/keyring/keyring_rpc.o 00:02:43.746 CC lib/keyring/keyring.o 00:02:43.746 CC lib/trace/trace_rpc.o 00:02:43.746 CC lib/notify/notify.o 00:02:43.746 CC lib/notify/notify_rpc.o 00:02:44.005 LIB libspdk_notify.a 00:02:44.005 SO libspdk_notify.so.6.0 00:02:44.005 LIB libspdk_keyring.a 00:02:44.005 LIB libspdk_trace.a 00:02:44.005 SYMLINK libspdk_notify.so 00:02:44.005 SO libspdk_keyring.so.2.0 00:02:44.005 SO libspdk_trace.so.11.0 00:02:44.263 SYMLINK libspdk_keyring.so 00:02:44.263 SYMLINK libspdk_trace.so 00:02:44.263 CC lib/thread/thread.o 00:02:44.263 CC lib/sock/sock.o 00:02:44.263 CC lib/thread/iobuf.o 00:02:44.263 CC lib/sock/sock_rpc.o 00:02:44.831 LIB libspdk_sock.a 00:02:44.831 SO libspdk_sock.so.10.0 00:02:44.831 SYMLINK libspdk_sock.so 00:02:45.090 CC lib/nvme/nvme_ctrlr.o 00:02:45.090 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:45.090 CC lib/nvme/nvme_fabric.o 00:02:45.090 CC lib/nvme/nvme_pcie_common.o 00:02:45.090 CC lib/nvme/nvme_ns_cmd.o 00:02:45.090 CC lib/nvme/nvme_ns.o 00:02:45.090 CC lib/nvme/nvme_qpair.o 00:02:45.090 CC lib/nvme/nvme_pcie.o 00:02:45.090 CC lib/nvme/nvme.o 00:02:45.658 CC lib/nvme/nvme_quirks.o 00:02:45.658 CC 
lib/nvme/nvme_transport.o 00:02:45.658 CC lib/nvme/nvme_discovery.o 00:02:45.658 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:45.917 LIB libspdk_thread.a 00:02:45.917 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:45.917 CC lib/nvme/nvme_tcp.o 00:02:45.917 SO libspdk_thread.so.11.0 00:02:45.917 CC lib/nvme/nvme_opal.o 00:02:45.917 SYMLINK libspdk_thread.so 00:02:45.917 CC lib/nvme/nvme_io_msg.o 00:02:45.917 CC lib/nvme/nvme_poll_group.o 00:02:46.176 CC lib/nvme/nvme_zns.o 00:02:46.176 CC lib/nvme/nvme_stubs.o 00:02:46.176 CC lib/nvme/nvme_auth.o 00:02:46.435 CC lib/accel/accel.o 00:02:46.435 CC lib/nvme/nvme_cuse.o 00:02:46.435 CC lib/blob/blobstore.o 00:02:46.435 CC lib/init/json_config.o 00:02:46.693 CC lib/virtio/virtio.o 00:02:46.693 CC lib/accel/accel_rpc.o 00:02:46.693 CC lib/fsdev/fsdev.o 00:02:46.693 CC lib/init/subsystem.o 00:02:46.952 CC lib/fsdev/fsdev_io.o 00:02:46.952 CC lib/virtio/virtio_vhost_user.o 00:02:46.952 CC lib/init/subsystem_rpc.o 00:02:46.952 CC lib/init/rpc.o 00:02:46.952 CC lib/fsdev/fsdev_rpc.o 00:02:47.210 CC lib/blob/request.o 00:02:47.210 LIB libspdk_init.a 00:02:47.211 CC lib/blob/zeroes.o 00:02:47.211 CC lib/blob/blob_bs_dev.o 00:02:47.211 SO libspdk_init.so.6.0 00:02:47.211 CC lib/virtio/virtio_vfio_user.o 00:02:47.211 CC lib/accel/accel_sw.o 00:02:47.211 SYMLINK libspdk_init.so 00:02:47.211 CC lib/nvme/nvme_rdma.o 00:02:47.211 CC lib/virtio/virtio_pci.o 00:02:47.469 LIB libspdk_fsdev.a 00:02:47.469 SO libspdk_fsdev.so.2.0 00:02:47.469 CC lib/event/app.o 00:02:47.469 CC lib/event/reactor.o 00:02:47.469 CC lib/event/log_rpc.o 00:02:47.469 CC lib/event/app_rpc.o 00:02:47.469 CC lib/event/scheduler_static.o 00:02:47.469 SYMLINK libspdk_fsdev.so 00:02:47.469 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:47.469 LIB libspdk_accel.a 00:02:47.727 SO libspdk_accel.so.16.0 00:02:47.727 LIB libspdk_virtio.a 00:02:47.727 SO libspdk_virtio.so.7.0 00:02:47.727 SYMLINK libspdk_accel.so 00:02:47.727 SYMLINK libspdk_virtio.so 00:02:47.727 LIB libspdk_event.a 00:02:47.727 CC lib/bdev/bdev.o 00:02:47.727 CC lib/bdev/scsi_nvme.o 00:02:47.727 CC lib/bdev/bdev_rpc.o 00:02:47.727 CC lib/bdev/part.o 00:02:47.727 CC lib/bdev/bdev_zone.o 00:02:47.986 SO libspdk_event.so.14.0 00:02:47.986 SYMLINK libspdk_event.so 00:02:48.244 LIB libspdk_fuse_dispatcher.a 00:02:48.244 SO libspdk_fuse_dispatcher.so.1.0 00:02:48.244 SYMLINK libspdk_fuse_dispatcher.so 00:02:48.244 LIB libspdk_nvme.a 00:02:48.503 SO libspdk_nvme.so.15.0 00:02:48.761 SYMLINK libspdk_nvme.so 00:02:49.695 LIB libspdk_blob.a 00:02:49.695 SO libspdk_blob.so.11.0 00:02:49.695 SYMLINK libspdk_blob.so 00:02:49.954 CC lib/blobfs/tree.o 00:02:49.954 CC lib/blobfs/blobfs.o 00:02:49.954 CC lib/lvol/lvol.o 00:02:50.521 LIB libspdk_bdev.a 00:02:50.521 SO libspdk_bdev.so.17.0 00:02:50.779 SYMLINK libspdk_bdev.so 00:02:50.779 LIB libspdk_blobfs.a 00:02:50.779 SO libspdk_blobfs.so.10.0 00:02:50.779 CC lib/scsi/dev.o 00:02:50.779 CC lib/nvmf/ctrlr.o 00:02:50.779 CC lib/scsi/lun.o 00:02:50.779 CC lib/scsi/port.o 00:02:50.779 CC lib/nvmf/ctrlr_discovery.o 00:02:50.779 CC lib/ublk/ublk.o 00:02:50.779 CC lib/nbd/nbd.o 00:02:50.779 CC lib/ftl/ftl_core.o 00:02:50.779 SYMLINK libspdk_blobfs.so 00:02:50.779 CC lib/ftl/ftl_init.o 00:02:51.038 LIB libspdk_lvol.a 00:02:51.038 SO libspdk_lvol.so.10.0 00:02:51.038 CC lib/ftl/ftl_layout.o 00:02:51.038 SYMLINK libspdk_lvol.so 00:02:51.038 CC lib/ftl/ftl_debug.o 00:02:51.038 CC lib/nbd/nbd_rpc.o 00:02:51.038 CC lib/nvmf/ctrlr_bdev.o 00:02:51.038 CC lib/scsi/scsi.o 00:02:51.297 CC lib/nvmf/subsystem.o 00:02:51.297 
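The lib/nvme objects compiled above make up the userspace NVMe driver. A hedged sketch of its discovery flow, enumerating local PCIe controllers: spdk_nvme_probe() with a NULL transport ID scans the local PCIe bus, calling probe_cb for each controller found and attach_cb for each one attached. The callback bodies and the env name below are illustrative assumptions, not taken from this build:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("probing %s\n", trid->traddr);
	return true;	/* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("attached %s\n", trid->traddr);
}

int main(void)
{
	struct spdk_env_opts env_opts;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "nvme_probe_example";	/* arbitrary example name */
	if (spdk_env_init(&env_opts) < 0)
		return 1;

	if (spdk_nvme_probe(NULL /* local PCIe */, NULL,
			    probe_cb, attach_cb, NULL) != 0)
		return 1;
	return 0;
}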
CC lib/nvmf/nvmf.o 00:02:51.297 CC lib/ublk/ublk_rpc.o 00:02:51.297 CC lib/nvmf/nvmf_rpc.o 00:02:51.297 LIB libspdk_nbd.a 00:02:51.297 CC lib/scsi/scsi_bdev.o 00:02:51.297 SO libspdk_nbd.so.7.0 00:02:51.297 CC lib/ftl/ftl_io.o 00:02:51.297 SYMLINK libspdk_nbd.so 00:02:51.297 CC lib/ftl/ftl_sb.o 00:02:51.297 CC lib/ftl/ftl_l2p.o 00:02:51.556 LIB libspdk_ublk.a 00:02:51.556 CC lib/ftl/ftl_l2p_flat.o 00:02:51.556 SO libspdk_ublk.so.3.0 00:02:51.556 CC lib/nvmf/transport.o 00:02:51.556 CC lib/nvmf/tcp.o 00:02:51.556 SYMLINK libspdk_ublk.so 00:02:51.556 CC lib/nvmf/stubs.o 00:02:51.814 CC lib/ftl/ftl_nv_cache.o 00:02:51.814 CC lib/nvmf/mdns_server.o 00:02:51.814 CC lib/scsi/scsi_pr.o 00:02:52.071 CC lib/nvmf/rdma.o 00:02:52.071 CC lib/scsi/scsi_rpc.o 00:02:52.071 CC lib/nvmf/auth.o 00:02:52.071 CC lib/ftl/ftl_band.o 00:02:52.071 CC lib/scsi/task.o 00:02:52.071 CC lib/ftl/ftl_band_ops.o 00:02:52.071 CC lib/ftl/ftl_writer.o 00:02:52.331 CC lib/ftl/ftl_rq.o 00:02:52.331 CC lib/ftl/ftl_reloc.o 00:02:52.331 LIB libspdk_scsi.a 00:02:52.331 SO libspdk_scsi.so.9.0 00:02:52.331 CC lib/ftl/ftl_l2p_cache.o 00:02:52.331 CC lib/ftl/ftl_p2l.o 00:02:52.331 SYMLINK libspdk_scsi.so 00:02:52.331 CC lib/ftl/ftl_p2l_log.o 00:02:52.331 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.331 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:52.592 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:52.592 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:52.592 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:52.592 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:52.592 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:52.592 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:52.852 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:52.852 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:52.852 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:52.852 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.852 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.852 CC lib/ftl/utils/ftl_conf.o 00:02:52.852 CC lib/ftl/utils/ftl_md.o 00:02:52.852 CC lib/ftl/utils/ftl_mempool.o 00:02:52.852 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.852 CC lib/ftl/utils/ftl_property.o 00:02:52.852 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.111 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.111 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.111 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.111 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.111 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.111 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:53.111 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.111 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.111 CC lib/iscsi/conn.o 00:02:53.111 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.111 CC lib/iscsi/init_grp.o 00:02:53.369 CC lib/iscsi/iscsi.o 00:02:53.369 CC lib/iscsi/param.o 00:02:53.369 CC lib/iscsi/portal_grp.o 00:02:53.369 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.369 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:53.369 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:53.369 CC lib/iscsi/tgt_node.o 00:02:53.369 CC lib/ftl/base/ftl_base_dev.o 00:02:53.627 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.627 CC lib/iscsi/iscsi_subsystem.o 00:02:53.627 CC lib/ftl/ftl_trace.o 00:02:53.627 CC lib/iscsi/iscsi_rpc.o 00:02:53.627 CC lib/iscsi/task.o 00:02:53.627 CC lib/vhost/vhost.o 00:02:53.627 CC lib/vhost/vhost_rpc.o 00:02:53.627 CC lib/vhost/vhost_scsi.o 00:02:53.627 CC lib/vhost/vhost_blk.o 00:02:53.627 LIB libspdk_ftl.a 00:02:53.885 CC lib/vhost/rte_vhost_user.o 00:02:53.885 SO libspdk_ftl.so.9.0 00:02:54.145 LIB libspdk_nvmf.a 00:02:54.145 SYMLINK libspdk_ftl.so 00:02:54.145 SO libspdk_nvmf.so.20.0 00:02:54.404 LIB libspdk_iscsi.a 00:02:54.404 SYMLINK libspdk_nvmf.so 
00:02:54.404 SO libspdk_iscsi.so.8.0 00:02:54.663 SYMLINK libspdk_iscsi.so 00:02:54.663 LIB libspdk_vhost.a 00:02:54.663 SO libspdk_vhost.so.8.0 00:02:54.663 SYMLINK libspdk_vhost.so 00:02:54.921 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.921 CC module/fsdev/aio/fsdev_aio.o 00:02:54.921 CC module/accel/ioat/accel_ioat.o 00:02:54.921 CC module/keyring/linux/keyring.o 00:02:54.921 CC module/sock/posix/posix.o 00:02:54.921 CC module/keyring/file/keyring.o 00:02:54.921 CC module/blob/bdev/blob_bdev.o 00:02:54.921 CC module/accel/dsa/accel_dsa.o 00:02:55.181 CC module/accel/error/accel_error.o 00:02:55.182 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:55.182 LIB libspdk_env_dpdk_rpc.a 00:02:55.182 SO libspdk_env_dpdk_rpc.so.6.0 00:02:55.182 CC module/keyring/linux/keyring_rpc.o 00:02:55.182 CC module/keyring/file/keyring_rpc.o 00:02:55.182 SYMLINK libspdk_env_dpdk_rpc.so 00:02:55.182 CC module/accel/error/accel_error_rpc.o 00:02:55.182 CC module/accel/dsa/accel_dsa_rpc.o 00:02:55.182 CC module/accel/ioat/accel_ioat_rpc.o 00:02:55.182 LIB libspdk_keyring_linux.a 00:02:55.182 SO libspdk_keyring_linux.so.1.0 00:02:55.182 LIB libspdk_scheduler_dynamic.a 00:02:55.182 SO libspdk_scheduler_dynamic.so.4.0 00:02:55.182 LIB libspdk_keyring_file.a 00:02:55.182 SYMLINK libspdk_keyring_linux.so 00:02:55.182 LIB libspdk_blob_bdev.a 00:02:55.182 LIB libspdk_accel_error.a 00:02:55.443 LIB libspdk_accel_ioat.a 00:02:55.443 SO libspdk_blob_bdev.so.11.0 00:02:55.443 SYMLINK libspdk_scheduler_dynamic.so 00:02:55.443 SO libspdk_keyring_file.so.2.0 00:02:55.443 LIB libspdk_accel_dsa.a 00:02:55.443 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:55.443 SO libspdk_accel_error.so.2.0 00:02:55.443 SO libspdk_accel_ioat.so.6.0 00:02:55.443 SO libspdk_accel_dsa.so.5.0 00:02:55.443 SYMLINK libspdk_blob_bdev.so 00:02:55.443 SYMLINK libspdk_keyring_file.so 00:02:55.443 SYMLINK libspdk_accel_error.so 00:02:55.443 CC module/fsdev/aio/linux_aio_mgr.o 00:02:55.443 SYMLINK libspdk_accel_dsa.so 00:02:55.443 SYMLINK libspdk_accel_ioat.so 00:02:55.443 CC module/accel/iaa/accel_iaa.o 00:02:55.443 CC module/accel/iaa/accel_iaa_rpc.o 00:02:55.444 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.444 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.444 CC module/bdev/delay/vbdev_delay.o 00:02:55.705 CC module/bdev/error/vbdev_error.o 00:02:55.705 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.705 LIB libspdk_accel_iaa.a 00:02:55.705 LIB libspdk_scheduler_dpdk_governor.a 00:02:55.705 CC module/bdev/gpt/gpt.o 00:02:55.705 SO libspdk_accel_iaa.so.3.0 00:02:55.705 LIB libspdk_scheduler_gscheduler.a 00:02:55.705 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.705 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:55.705 SO libspdk_scheduler_gscheduler.so.4.0 00:02:55.705 SYMLINK libspdk_accel_iaa.so 00:02:55.705 SYMLINK libspdk_scheduler_gscheduler.so 00:02:55.705 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.705 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.705 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:55.705 LIB libspdk_fsdev_aio.a 00:02:55.705 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.705 SO libspdk_fsdev_aio.so.1.0 00:02:55.705 LIB libspdk_sock_posix.a 00:02:55.705 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.705 SO libspdk_sock_posix.so.6.0 00:02:55.705 SYMLINK libspdk_fsdev_aio.so 00:02:55.705 LIB libspdk_bdev_error.a 00:02:55.705 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.965 SO libspdk_bdev_error.so.6.0 00:02:55.965 SYMLINK libspdk_sock_posix.so 00:02:55.965 LIB libspdk_bdev_delay.a 00:02:55.965 LIB 
libspdk_blobfs_bdev.a 00:02:55.965 SYMLINK libspdk_bdev_error.so 00:02:55.965 SO libspdk_bdev_delay.so.6.0 00:02:55.965 CC module/bdev/malloc/bdev_malloc.o 00:02:55.965 SO libspdk_blobfs_bdev.so.6.0 00:02:55.965 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:55.965 SYMLINK libspdk_bdev_delay.so 00:02:55.965 CC module/bdev/null/bdev_null.o 00:02:55.965 LIB libspdk_bdev_gpt.a 00:02:55.965 CC module/bdev/null/bdev_null_rpc.o 00:02:55.965 SYMLINK libspdk_blobfs_bdev.so 00:02:55.965 CC module/bdev/nvme/bdev_nvme.o 00:02:55.965 SO libspdk_bdev_gpt.so.6.0 00:02:55.965 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.965 SYMLINK libspdk_bdev_gpt.so 00:02:56.224 CC module/bdev/raid/bdev_raid.o 00:02:56.224 LIB libspdk_bdev_lvol.a 00:02:56.224 SO libspdk_bdev_lvol.so.6.0 00:02:56.224 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.224 CC module/bdev/split/vbdev_split.o 00:02:56.224 LIB libspdk_bdev_null.a 00:02:56.224 CC module/bdev/xnvme/bdev_xnvme.o 00:02:56.224 LIB libspdk_bdev_malloc.a 00:02:56.224 SYMLINK libspdk_bdev_lvol.so 00:02:56.224 CC module/bdev/aio/bdev_aio.o 00:02:56.224 SO libspdk_bdev_null.so.6.0 00:02:56.224 SO libspdk_bdev_malloc.so.6.0 00:02:56.224 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:56.224 SYMLINK libspdk_bdev_null.so 00:02:56.224 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.224 SYMLINK libspdk_bdev_malloc.so 00:02:56.224 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.224 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.482 CC module/bdev/ftl/bdev_ftl.o 00:02:56.482 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.482 LIB libspdk_bdev_split.a 00:02:56.482 LIB libspdk_bdev_passthru.a 00:02:56.482 CC module/bdev/iscsi/bdev_iscsi.o 00:02:56.482 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:56.482 SO libspdk_bdev_split.so.6.0 00:02:56.482 LIB libspdk_bdev_aio.a 00:02:56.482 SO libspdk_bdev_passthru.so.6.0 00:02:56.482 SO libspdk_bdev_aio.so.6.0 00:02:56.482 SYMLINK libspdk_bdev_split.so 00:02:56.482 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:56.482 LIB libspdk_bdev_zone_block.a 00:02:56.482 SYMLINK libspdk_bdev_passthru.so 00:02:56.482 CC module/bdev/nvme/nvme_rpc.o 00:02:56.482 SO libspdk_bdev_zone_block.so.6.0 00:02:56.482 SYMLINK libspdk_bdev_aio.so 00:02:56.482 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.482 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:56.482 LIB libspdk_bdev_xnvme.a 00:02:56.740 SO libspdk_bdev_xnvme.so.3.0 00:02:56.740 SYMLINK libspdk_bdev_zone_block.so 00:02:56.740 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.740 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.740 SYMLINK libspdk_bdev_xnvme.so 00:02:56.740 CC module/bdev/nvme/vbdev_opal.o 00:02:56.740 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.740 LIB libspdk_bdev_iscsi.a 00:02:56.740 LIB libspdk_bdev_ftl.a 00:02:56.740 SO libspdk_bdev_iscsi.so.6.0 00:02:56.740 SO libspdk_bdev_ftl.so.6.0 00:02:56.740 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.740 SYMLINK libspdk_bdev_ftl.so 00:02:56.740 CC module/bdev/raid/raid0.o 00:02:56.740 CC module/bdev/raid/raid1.o 00:02:56.740 SYMLINK libspdk_bdev_iscsi.so 00:02:56.740 CC module/bdev/raid/concat.o 00:02:56.740 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:56.740 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:56.740 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:56.997 LIB libspdk_bdev_raid.a 00:02:56.997 SO libspdk_bdev_raid.so.6.0 00:02:56.997 SYMLINK libspdk_bdev_raid.so 00:02:57.255 LIB libspdk_bdev_virtio.a 00:02:57.255 SO libspdk_bdev_virtio.so.6.0 00:02:57.255 SYMLINK libspdk_bdev_virtio.so 00:02:58.190 LIB libspdk_bdev_nvme.a 
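With the bdev layer and its module set (delay, error, gpt, malloc, null, nvme, raid, xnvme, and so on) linked above, and the event framework built earlier, a typical consumer starts the SPDK application, opens a bdev by name, and shuts down. A hedged sketch follows; "Malloc0" is an assumed bdev that would have to be created via a JSON config handed to the app (e.g. through spdk_app_parse_args), and the app name is arbitrary:

#include "spdk/event.h"
#include "spdk/bdev.h"
#include "spdk/log.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
	      void *event_ctx)
{
	/* Hot-remove and resize events would be handled here; ignored. */
}

static void
start_fn(void *arg)
{
	struct spdk_bdev_desc *desc = NULL;
	int rc;

	rc = spdk_bdev_open_ext("Malloc0", false, bdev_event_cb, NULL, &desc);
	if (rc == 0) {
		SPDK_NOTICELOG("opened bdev, block size %u\n",
			       spdk_bdev_get_block_size(
					spdk_bdev_desc_get_bdev(desc)));
		spdk_bdev_close(desc);
	} else {
		SPDK_ERRLOG("open failed: %d\n", rc);
	}
	spdk_app_stop(rc);
}

int main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "bdev_open_example";	/* arbitrary app name */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}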
00:02:58.190 SO libspdk_bdev_nvme.so.7.1 00:02:58.190 SYMLINK libspdk_bdev_nvme.so 00:02:58.448 CC module/event/subsystems/sock/sock.o 00:02:58.707 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.707 CC module/event/subsystems/vmd/vmd.o 00:02:58.707 CC module/event/subsystems/keyring/keyring.o 00:02:58.707 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.707 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.707 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.707 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.707 CC module/event/subsystems/fsdev/fsdev.o 00:02:58.707 LIB libspdk_event_keyring.a 00:02:58.707 LIB libspdk_event_sock.a 00:02:58.707 LIB libspdk_event_vhost_blk.a 00:02:58.707 LIB libspdk_event_fsdev.a 00:02:58.707 SO libspdk_event_keyring.so.1.0 00:02:58.707 LIB libspdk_event_vmd.a 00:02:58.707 SO libspdk_event_sock.so.5.0 00:02:58.707 SO libspdk_event_fsdev.so.1.0 00:02:58.707 LIB libspdk_event_scheduler.a 00:02:58.707 LIB libspdk_event_iobuf.a 00:02:58.707 SO libspdk_event_vhost_blk.so.3.0 00:02:58.707 SO libspdk_event_vmd.so.6.0 00:02:58.707 SO libspdk_event_scheduler.so.4.0 00:02:58.707 SYMLINK libspdk_event_keyring.so 00:02:58.707 SO libspdk_event_iobuf.so.3.0 00:02:58.707 SYMLINK libspdk_event_fsdev.so 00:02:58.707 SYMLINK libspdk_event_sock.so 00:02:58.707 SYMLINK libspdk_event_vhost_blk.so 00:02:58.707 SYMLINK libspdk_event_scheduler.so 00:02:58.707 SYMLINK libspdk_event_iobuf.so 00:02:58.707 SYMLINK libspdk_event_vmd.so 00:02:58.973 CC module/event/subsystems/accel/accel.o 00:02:58.973 LIB libspdk_event_accel.a 00:02:59.272 SO libspdk_event_accel.so.6.0 00:02:59.272 SYMLINK libspdk_event_accel.so 00:02:59.272 CC module/event/subsystems/bdev/bdev.o 00:02:59.529 LIB libspdk_event_bdev.a 00:02:59.529 SO libspdk_event_bdev.so.6.0 00:02:59.529 SYMLINK libspdk_event_bdev.so 00:02:59.787 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:59.787 CC module/event/subsystems/ublk/ublk.o 00:02:59.787 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:59.787 CC module/event/subsystems/nbd/nbd.o 00:02:59.787 CC module/event/subsystems/scsi/scsi.o 00:02:59.787 LIB libspdk_event_scsi.a 00:02:59.787 LIB libspdk_event_ublk.a 00:02:59.787 LIB libspdk_event_nbd.a 00:02:59.787 SO libspdk_event_scsi.so.6.0 00:02:59.787 SO libspdk_event_ublk.so.3.0 00:02:59.787 SO libspdk_event_nbd.so.6.0 00:02:59.787 SYMLINK libspdk_event_scsi.so 00:02:59.787 SYMLINK libspdk_event_ublk.so 00:03:00.045 SYMLINK libspdk_event_nbd.so 00:03:00.045 LIB libspdk_event_nvmf.a 00:03:00.045 SO libspdk_event_nvmf.so.6.0 00:03:00.045 SYMLINK libspdk_event_nvmf.so 00:03:00.045 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:00.045 CC module/event/subsystems/iscsi/iscsi.o 00:03:00.302 LIB libspdk_event_vhost_scsi.a 00:03:00.302 SO libspdk_event_vhost_scsi.so.3.0 00:03:00.302 LIB libspdk_event_iscsi.a 00:03:00.302 SO libspdk_event_iscsi.so.6.0 00:03:00.302 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.302 SYMLINK libspdk_event_iscsi.so 00:03:00.302 SO libspdk.so.6.0 00:03:00.302 SYMLINK libspdk.so 00:03:00.560 CXX app/trace/trace.o 00:03:00.560 CC app/trace_record/trace_record.o 00:03:00.560 TEST_HEADER include/spdk/accel.h 00:03:00.560 TEST_HEADER include/spdk/accel_module.h 00:03:00.560 TEST_HEADER include/spdk/assert.h 00:03:00.560 CC test/rpc_client/rpc_client_test.o 00:03:00.560 TEST_HEADER include/spdk/barrier.h 00:03:00.560 TEST_HEADER include/spdk/base64.h 00:03:00.560 TEST_HEADER include/spdk/bdev.h 00:03:00.560 TEST_HEADER include/spdk/bdev_module.h 00:03:00.560 TEST_HEADER 
include/spdk/bdev_zone.h 00:03:00.560 TEST_HEADER include/spdk/bit_array.h 00:03:00.560 TEST_HEADER include/spdk/bit_pool.h 00:03:00.560 TEST_HEADER include/spdk/blob_bdev.h 00:03:00.560 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:00.560 TEST_HEADER include/spdk/blobfs.h 00:03:00.560 TEST_HEADER include/spdk/blob.h 00:03:00.560 TEST_HEADER include/spdk/conf.h 00:03:00.560 TEST_HEADER include/spdk/config.h 00:03:00.560 TEST_HEADER include/spdk/cpuset.h 00:03:00.560 TEST_HEADER include/spdk/crc16.h 00:03:00.560 TEST_HEADER include/spdk/crc32.h 00:03:00.560 TEST_HEADER include/spdk/crc64.h 00:03:00.560 TEST_HEADER include/spdk/dif.h 00:03:00.560 TEST_HEADER include/spdk/dma.h 00:03:00.560 TEST_HEADER include/spdk/endian.h 00:03:00.560 TEST_HEADER include/spdk/env_dpdk.h 00:03:00.560 TEST_HEADER include/spdk/env.h 00:03:00.560 TEST_HEADER include/spdk/event.h 00:03:00.560 TEST_HEADER include/spdk/fd_group.h 00:03:00.560 CC test/thread/poller_perf/poller_perf.o 00:03:00.560 TEST_HEADER include/spdk/fd.h 00:03:00.560 CC examples/ioat/perf/perf.o 00:03:00.560 TEST_HEADER include/spdk/file.h 00:03:00.560 TEST_HEADER include/spdk/fsdev.h 00:03:00.560 CC examples/util/zipf/zipf.o 00:03:00.560 TEST_HEADER include/spdk/fsdev_module.h 00:03:00.560 CC test/app/bdev_svc/bdev_svc.o 00:03:00.560 TEST_HEADER include/spdk/ftl.h 00:03:00.560 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:00.560 TEST_HEADER include/spdk/gpt_spec.h 00:03:00.560 TEST_HEADER include/spdk/hexlify.h 00:03:00.560 TEST_HEADER include/spdk/histogram_data.h 00:03:00.817 TEST_HEADER include/spdk/idxd.h 00:03:00.817 CC test/dma/test_dma/test_dma.o 00:03:00.817 TEST_HEADER include/spdk/idxd_spec.h 00:03:00.817 TEST_HEADER include/spdk/init.h 00:03:00.817 TEST_HEADER include/spdk/ioat.h 00:03:00.817 TEST_HEADER include/spdk/ioat_spec.h 00:03:00.817 TEST_HEADER include/spdk/iscsi_spec.h 00:03:00.817 TEST_HEADER include/spdk/json.h 00:03:00.817 TEST_HEADER include/spdk/jsonrpc.h 00:03:00.817 TEST_HEADER include/spdk/keyring.h 00:03:00.817 TEST_HEADER include/spdk/keyring_module.h 00:03:00.817 TEST_HEADER include/spdk/likely.h 00:03:00.817 TEST_HEADER include/spdk/log.h 00:03:00.817 TEST_HEADER include/spdk/lvol.h 00:03:00.817 TEST_HEADER include/spdk/md5.h 00:03:00.817 TEST_HEADER include/spdk/memory.h 00:03:00.817 TEST_HEADER include/spdk/mmio.h 00:03:00.817 TEST_HEADER include/spdk/nbd.h 00:03:00.817 TEST_HEADER include/spdk/net.h 00:03:00.817 TEST_HEADER include/spdk/notify.h 00:03:00.817 TEST_HEADER include/spdk/nvme.h 00:03:00.817 TEST_HEADER include/spdk/nvme_intel.h 00:03:00.817 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:00.817 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:00.817 TEST_HEADER include/spdk/nvme_spec.h 00:03:00.817 TEST_HEADER include/spdk/nvme_zns.h 00:03:00.817 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:00.817 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:00.817 LINK rpc_client_test 00:03:00.817 TEST_HEADER include/spdk/nvmf.h 00:03:00.817 CC test/env/mem_callbacks/mem_callbacks.o 00:03:00.817 TEST_HEADER include/spdk/nvmf_spec.h 00:03:00.817 TEST_HEADER include/spdk/nvmf_transport.h 00:03:00.817 TEST_HEADER include/spdk/opal.h 00:03:00.817 TEST_HEADER include/spdk/opal_spec.h 00:03:00.817 TEST_HEADER include/spdk/pci_ids.h 00:03:00.817 TEST_HEADER include/spdk/pipe.h 00:03:00.817 TEST_HEADER include/spdk/queue.h 00:03:00.817 TEST_HEADER include/spdk/reduce.h 00:03:00.817 TEST_HEADER include/spdk/rpc.h 00:03:00.817 TEST_HEADER include/spdk/scheduler.h 00:03:00.817 TEST_HEADER include/spdk/scsi.h 00:03:00.817 
TEST_HEADER include/spdk/scsi_spec.h 00:03:00.817 TEST_HEADER include/spdk/sock.h 00:03:00.817 TEST_HEADER include/spdk/stdinc.h 00:03:00.817 TEST_HEADER include/spdk/string.h 00:03:00.817 TEST_HEADER include/spdk/thread.h 00:03:00.817 TEST_HEADER include/spdk/trace.h 00:03:00.817 TEST_HEADER include/spdk/trace_parser.h 00:03:00.817 TEST_HEADER include/spdk/tree.h 00:03:00.817 TEST_HEADER include/spdk/ublk.h 00:03:00.817 LINK poller_perf 00:03:00.817 TEST_HEADER include/spdk/util.h 00:03:00.817 TEST_HEADER include/spdk/uuid.h 00:03:00.817 TEST_HEADER include/spdk/version.h 00:03:00.817 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:00.817 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:00.817 LINK ioat_perf 00:03:00.817 TEST_HEADER include/spdk/vhost.h 00:03:00.817 TEST_HEADER include/spdk/vmd.h 00:03:00.817 TEST_HEADER include/spdk/xor.h 00:03:00.817 TEST_HEADER include/spdk/zipf.h 00:03:00.817 CXX test/cpp_headers/accel.o 00:03:00.817 LINK zipf 00:03:00.817 LINK spdk_trace_record 00:03:00.817 LINK bdev_svc 00:03:00.817 CXX test/cpp_headers/accel_module.o 00:03:00.817 CXX test/cpp_headers/assert.o 00:03:00.817 LINK spdk_trace 00:03:01.076 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:01.076 CXX test/cpp_headers/barrier.o 00:03:01.076 CC examples/ioat/verify/verify.o 00:03:01.076 CXX test/cpp_headers/base64.o 00:03:01.076 CXX test/cpp_headers/bdev.o 00:03:01.076 CC test/event/event_perf/event_perf.o 00:03:01.076 LINK interrupt_tgt 00:03:01.076 CXX test/cpp_headers/bdev_module.o 00:03:01.076 LINK mem_callbacks 00:03:01.076 LINK verify 00:03:01.076 LINK test_dma 00:03:01.077 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:01.077 CC app/nvmf_tgt/nvmf_main.o 00:03:01.335 LINK event_perf 00:03:01.335 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:01.335 CC test/env/vtophys/vtophys.o 00:03:01.335 CC test/app/histogram_perf/histogram_perf.o 00:03:01.335 CXX test/cpp_headers/bdev_zone.o 00:03:01.335 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:01.335 CXX test/cpp_headers/bit_array.o 00:03:01.335 LINK nvmf_tgt 00:03:01.335 LINK env_dpdk_post_init 00:03:01.335 LINK histogram_perf 00:03:01.335 LINK vtophys 00:03:01.335 CC test/event/reactor/reactor.o 00:03:01.335 CC examples/thread/thread/thread_ex.o 00:03:01.335 CXX test/cpp_headers/bit_pool.o 00:03:01.593 CXX test/cpp_headers/blob_bdev.o 00:03:01.593 LINK nvme_fuzz 00:03:01.593 LINK reactor 00:03:01.593 CC test/env/memory/memory_ut.o 00:03:01.593 CC test/accel/dif/dif.o 00:03:01.593 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.593 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.593 CC app/spdk_lspci/spdk_lspci.o 00:03:01.593 CC test/blobfs/mkfs/mkfs.o 00:03:01.593 LINK thread 00:03:01.593 CC test/event/reactor_perf/reactor_perf.o 00:03:01.593 CC app/spdk_tgt/spdk_tgt.o 00:03:01.851 CXX test/cpp_headers/blobfs.o 00:03:01.851 LINK iscsi_tgt 00:03:01.851 LINK spdk_lspci 00:03:01.851 LINK reactor_perf 00:03:01.851 LINK mkfs 00:03:01.851 LINK spdk_tgt 00:03:01.851 CXX test/cpp_headers/blob.o 00:03:01.851 CC test/event/app_repeat/app_repeat.o 00:03:02.110 CC examples/sock/hello_world/hello_sock.o 00:03:02.110 CC test/env/pci/pci_ut.o 00:03:02.110 CC test/event/scheduler/scheduler.o 00:03:02.110 CXX test/cpp_headers/conf.o 00:03:02.110 LINK app_repeat 00:03:02.110 CC app/spdk_nvme_perf/perf.o 00:03:02.110 LINK dif 00:03:02.110 CXX test/cpp_headers/config.o 00:03:02.110 CC test/lvol/esnap/esnap.o 00:03:02.110 LINK hello_sock 00:03:02.110 CXX test/cpp_headers/cpuset.o 00:03:02.110 LINK scheduler 00:03:02.110 CXX test/cpp_headers/crc16.o 00:03:02.369 CC 
app/spdk_nvme_identify/identify.o 00:03:02.369 CXX test/cpp_headers/crc32.o 00:03:02.369 CXX test/cpp_headers/crc64.o 00:03:02.369 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.369 CC examples/vmd/lsvmd/lsvmd.o 00:03:02.369 LINK pci_ut 00:03:02.369 LINK lsvmd 00:03:02.369 CXX test/cpp_headers/dif.o 00:03:02.627 LINK spdk_nvme_discover 00:03:02.627 CC app/spdk_top/spdk_top.o 00:03:02.627 CXX test/cpp_headers/dma.o 00:03:02.627 LINK memory_ut 00:03:02.627 CC examples/vmd/led/led.o 00:03:02.627 CXX test/cpp_headers/endian.o 00:03:02.627 LINK led 00:03:02.885 LINK spdk_nvme_perf 00:03:02.885 CXX test/cpp_headers/env_dpdk.o 00:03:02.885 CC test/nvme/aer/aer.o 00:03:02.885 CC app/vhost/vhost.o 00:03:02.885 CC app/spdk_dd/spdk_dd.o 00:03:02.885 LINK spdk_nvme_identify 00:03:02.885 CXX test/cpp_headers/env.o 00:03:02.885 LINK iscsi_fuzz 00:03:02.885 CC examples/idxd/perf/perf.o 00:03:02.885 LINK vhost 00:03:03.144 LINK aer 00:03:03.144 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:03.144 CXX test/cpp_headers/event.o 00:03:03.144 LINK spdk_dd 00:03:03.144 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:03.144 CC examples/accel/perf/accel_perf.o 00:03:03.144 CC test/nvme/reset/reset.o 00:03:03.144 CXX test/cpp_headers/fd_group.o 00:03:03.144 LINK spdk_top 00:03:03.144 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:03.403 LINK idxd_perf 00:03:03.403 CC examples/blob/hello_world/hello_blob.o 00:03:03.403 LINK hello_fsdev 00:03:03.403 CXX test/cpp_headers/fd.o 00:03:03.403 CC examples/blob/cli/blobcli.o 00:03:03.403 CC app/fio/nvme/fio_plugin.o 00:03:03.403 LINK reset 00:03:03.403 CXX test/cpp_headers/file.o 00:03:03.403 LINK hello_blob 00:03:03.403 CC app/fio/bdev/fio_plugin.o 00:03:03.662 CC examples/nvme/hello_world/hello_world.o 00:03:03.662 LINK accel_perf 00:03:03.662 LINK vhost_fuzz 00:03:03.662 CXX test/cpp_headers/fsdev.o 00:03:03.662 CC test/nvme/sgl/sgl.o 00:03:03.662 CC test/nvme/e2edp/nvme_dp.o 00:03:03.662 CXX test/cpp_headers/fsdev_module.o 00:03:03.662 LINK blobcli 00:03:03.662 LINK hello_world 00:03:03.662 CC test/app/jsoncat/jsoncat.o 00:03:03.920 CXX test/cpp_headers/ftl.o 00:03:03.920 LINK nvme_dp 00:03:03.920 LINK sgl 00:03:03.920 LINK spdk_bdev 00:03:03.920 LINK jsoncat 00:03:03.920 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.920 CC examples/nvme/reconnect/reconnect.o 00:03:03.920 CC examples/bdev/hello_world/hello_bdev.o 00:03:03.920 CXX test/cpp_headers/fuse_dispatcher.o 00:03:03.920 LINK spdk_nvme 00:03:04.179 CC test/app/stub/stub.o 00:03:04.179 CC examples/nvme/arbitration/arbitration.o 00:03:04.179 CXX test/cpp_headers/gpt_spec.o 00:03:04.179 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.179 CC test/nvme/overhead/overhead.o 00:03:04.179 CC test/nvme/err_injection/err_injection.o 00:03:04.179 LINK hello_bdev 00:03:04.179 LINK stub 00:03:04.179 CXX test/cpp_headers/hexlify.o 00:03:04.179 LINK err_injection 00:03:04.179 LINK reconnect 00:03:04.436 CXX test/cpp_headers/histogram_data.o 00:03:04.436 LINK overhead 00:03:04.436 CC examples/nvme/hotplug/hotplug.o 00:03:04.437 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:04.437 LINK arbitration 00:03:04.437 CC examples/nvme/abort/abort.o 00:03:04.437 CXX test/cpp_headers/idxd.o 00:03:04.437 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:04.437 LINK nvme_manage 00:03:04.437 LINK cmb_copy 00:03:04.437 CC test/nvme/startup/startup.o 00:03:04.437 LINK bdevperf 00:03:04.694 CXX test/cpp_headers/idxd_spec.o 00:03:04.694 LINK hotplug 00:03:04.694 CC test/nvme/reserve/reserve.o 00:03:04.694 LINK pmr_persistence 
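Annotation: the long run of CXX test/cpp_headers/*.o compilations threaded through the build output above is a header self-containedness check: every public header under include/spdk/ is compiled as its own translation unit, so a header that forgets one of its own includes fails the build immediately. A minimal sketch of the idea (paths, compiler, and flags are assumptions for illustration, not the harness's actual generator):

    # Sketch: compile each public header in isolation (assumed layout: include/spdk/*.h).
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # Generate a one-line translation unit that includes only this header.
        printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_test_$name.c"
        # Any missing transitive include surfaces as a compile error here.
        cc -Iinclude -c "/tmp/hdr_test_$name.c" -o /dev/null || echo "not self-contained: $hdr"
    done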
00:03:04.694 CC test/nvme/simple_copy/simple_copy.o 00:03:04.694 LINK startup 00:03:04.694 CC test/nvme/connect_stress/connect_stress.o 00:03:04.694 CXX test/cpp_headers/init.o 00:03:04.694 LINK abort 00:03:04.694 CC test/nvme/boot_partition/boot_partition.o 00:03:04.694 CC test/nvme/compliance/nvme_compliance.o 00:03:04.694 CC test/nvme/fused_ordering/fused_ordering.o 00:03:04.694 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:04.694 LINK reserve 00:03:04.952 LINK connect_stress 00:03:04.953 LINK boot_partition 00:03:04.953 LINK simple_copy 00:03:04.953 CXX test/cpp_headers/ioat.o 00:03:04.953 CXX test/cpp_headers/ioat_spec.o 00:03:04.953 LINK fused_ordering 00:03:04.953 LINK doorbell_aers 00:03:04.953 CXX test/cpp_headers/iscsi_spec.o 00:03:04.953 CC examples/nvmf/nvmf/nvmf.o 00:03:04.953 CXX test/cpp_headers/json.o 00:03:04.953 CXX test/cpp_headers/jsonrpc.o 00:03:04.953 CC test/nvme/cuse/cuse.o 00:03:04.953 CC test/nvme/fdp/fdp.o 00:03:04.953 LINK nvme_compliance 00:03:04.953 CXX test/cpp_headers/keyring.o 00:03:05.211 CXX test/cpp_headers/keyring_module.o 00:03:05.211 CXX test/cpp_headers/likely.o 00:03:05.211 CXX test/cpp_headers/log.o 00:03:05.211 CXX test/cpp_headers/lvol.o 00:03:05.211 CC test/bdev/bdevio/bdevio.o 00:03:05.211 CXX test/cpp_headers/md5.o 00:03:05.211 CXX test/cpp_headers/memory.o 00:03:05.211 LINK nvmf 00:03:05.211 CXX test/cpp_headers/mmio.o 00:03:05.211 LINK fdp 00:03:05.211 CXX test/cpp_headers/nbd.o 00:03:05.211 CXX test/cpp_headers/net.o 00:03:05.211 CXX test/cpp_headers/notify.o 00:03:05.469 CXX test/cpp_headers/nvme.o 00:03:05.469 CXX test/cpp_headers/nvme_intel.o 00:03:05.469 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.469 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.469 CXX test/cpp_headers/nvme_spec.o 00:03:05.469 CXX test/cpp_headers/nvme_zns.o 00:03:05.469 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.469 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.469 LINK bdevio 00:03:05.469 CXX test/cpp_headers/nvmf.o 00:03:05.469 CXX test/cpp_headers/nvmf_spec.o 00:03:05.469 CXX test/cpp_headers/nvmf_transport.o 00:03:05.469 CXX test/cpp_headers/opal.o 00:03:05.469 CXX test/cpp_headers/opal_spec.o 00:03:05.469 CXX test/cpp_headers/pci_ids.o 00:03:05.727 CXX test/cpp_headers/pipe.o 00:03:05.727 CXX test/cpp_headers/queue.o 00:03:05.727 CXX test/cpp_headers/reduce.o 00:03:05.727 CXX test/cpp_headers/rpc.o 00:03:05.727 CXX test/cpp_headers/scheduler.o 00:03:05.727 CXX test/cpp_headers/scsi.o 00:03:05.727 CXX test/cpp_headers/scsi_spec.o 00:03:05.727 CXX test/cpp_headers/sock.o 00:03:05.727 CXX test/cpp_headers/stdinc.o 00:03:05.727 CXX test/cpp_headers/string.o 00:03:05.727 CXX test/cpp_headers/thread.o 00:03:05.727 CXX test/cpp_headers/trace.o 00:03:05.727 CXX test/cpp_headers/trace_parser.o 00:03:05.727 CXX test/cpp_headers/tree.o 00:03:05.727 CXX test/cpp_headers/ublk.o 00:03:05.727 CXX test/cpp_headers/util.o 00:03:05.727 CXX test/cpp_headers/uuid.o 00:03:05.727 CXX test/cpp_headers/version.o 00:03:05.727 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.727 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.985 CXX test/cpp_headers/vhost.o 00:03:05.985 CXX test/cpp_headers/vmd.o 00:03:05.985 CXX test/cpp_headers/xor.o 00:03:05.985 CXX test/cpp_headers/zipf.o 00:03:05.985 LINK cuse 00:03:06.919 LINK esnap 00:03:07.178 ************************************ 00:03:07.178 END TEST make 00:03:07.178 ************************************ 00:03:07.178 00:03:07.178 real 1m3.002s 00:03:07.178 user 5m55.927s 00:03:07.178 sys 1m3.155s 00:03:07.178 10:31:32 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:07.178 10:31:32 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.178 10:31:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:07.178 10:31:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:07.178 10:31:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:07.178 10:31:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.178 10:31:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:07.178 10:31:32 -- pm/common@44 -- $ pid=5071 00:03:07.178 10:31:32 -- pm/common@50 -- $ kill -TERM 5071 00:03:07.178 10:31:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.178 10:31:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:07.178 10:31:32 -- pm/common@44 -- $ pid=5072 00:03:07.178 10:31:32 -- pm/common@50 -- $ kill -TERM 5072 00:03:07.178 10:31:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:07.178 10:31:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.178 10:31:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:07.178 10:31:33 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:07.178 10:31:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:07.436 10:31:33 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:07.436 10:31:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:07.436 10:31:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:07.436 10:31:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:07.436 10:31:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:07.436 10:31:33 -- scripts/common.sh@336 -- # read -ra ver1 00:03:07.436 10:31:33 -- scripts/common.sh@337 -- # IFS=.-: 00:03:07.436 10:31:33 -- scripts/common.sh@337 -- # read -ra ver2 00:03:07.436 10:31:33 -- scripts/common.sh@338 -- # local 'op=<' 00:03:07.436 10:31:33 -- scripts/common.sh@340 -- # ver1_l=2 00:03:07.436 10:31:33 -- scripts/common.sh@341 -- # ver2_l=1 00:03:07.436 10:31:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:07.436 10:31:33 -- scripts/common.sh@344 -- # case "$op" in 00:03:07.436 10:31:33 -- scripts/common.sh@345 -- # : 1 00:03:07.436 10:31:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:07.436 10:31:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:07.436 10:31:33 -- scripts/common.sh@365 -- # decimal 1 00:03:07.436 10:31:33 -- scripts/common.sh@353 -- # local d=1 00:03:07.436 10:31:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:07.436 10:31:33 -- scripts/common.sh@355 -- # echo 1 00:03:07.436 10:31:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:07.436 10:31:33 -- scripts/common.sh@366 -- # decimal 2 00:03:07.436 10:31:33 -- scripts/common.sh@353 -- # local d=2 00:03:07.436 10:31:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:07.436 10:31:33 -- scripts/common.sh@355 -- # echo 2 00:03:07.436 10:31:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:07.436 10:31:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:07.436 10:31:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:07.436 10:31:33 -- scripts/common.sh@368 -- # return 0 00:03:07.436 10:31:33 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:07.437 10:31:33 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:07.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.437 --rc genhtml_branch_coverage=1 00:03:07.437 --rc genhtml_function_coverage=1 00:03:07.437 --rc genhtml_legend=1 00:03:07.437 --rc geninfo_all_blocks=1 00:03:07.437 --rc geninfo_unexecuted_blocks=1 00:03:07.437 00:03:07.437 ' 00:03:07.437 10:31:33 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:07.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.437 --rc genhtml_branch_coverage=1 00:03:07.437 --rc genhtml_function_coverage=1 00:03:07.437 --rc genhtml_legend=1 00:03:07.437 --rc geninfo_all_blocks=1 00:03:07.437 --rc geninfo_unexecuted_blocks=1 00:03:07.437 00:03:07.437 ' 00:03:07.437 10:31:33 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:07.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.437 --rc genhtml_branch_coverage=1 00:03:07.437 --rc genhtml_function_coverage=1 00:03:07.437 --rc genhtml_legend=1 00:03:07.437 --rc geninfo_all_blocks=1 00:03:07.437 --rc geninfo_unexecuted_blocks=1 00:03:07.437 00:03:07.437 ' 00:03:07.437 10:31:33 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:07.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.437 --rc genhtml_branch_coverage=1 00:03:07.437 --rc genhtml_function_coverage=1 00:03:07.437 --rc genhtml_legend=1 00:03:07.437 --rc geninfo_all_blocks=1 00:03:07.437 --rc geninfo_unexecuted_blocks=1 00:03:07.437 00:03:07.437 ' 00:03:07.437 10:31:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:07.437 10:31:33 -- nvmf/common.sh@7 -- # uname -s 00:03:07.437 10:31:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:07.437 10:31:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:07.437 10:31:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:07.437 10:31:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:07.437 10:31:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:07.437 10:31:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:07.437 10:31:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:07.437 10:31:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:07.437 10:31:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:07.437 10:31:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:07.437 10:31:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db5d57a9-13fc-4a19-8606-73dd9425ba6b 00:03:07.437 
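Annotation: the scripts/common.sh trace above (cmp_versions with the decimal helper) is a component-wise version comparison in pure bash: both version strings are split on '.', '-' and ':' into arrays, then compared element by element until one side wins, which is how the run concludes that lcov 1.15 predates 2.x and picks the older lcov option set. A simplified sketch of the same technique (function name and the treatment of missing components are mine, inferred from the trace):

    # Sketch: return 0 if $1 is an older version than $2 (numeric components only).
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing components compare as 0, so "1.15" vs "2" works.
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "older"   # prints "older"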
10:31:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=db5d57a9-13fc-4a19-8606-73dd9425ba6b 00:03:07.437 10:31:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:07.437 10:31:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:07.437 10:31:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:07.437 10:31:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:07.437 10:31:33 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:07.437 10:31:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:07.437 10:31:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:07.437 10:31:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.437 10:31:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.437 10:31:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.437 10:31:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.437 10:31:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.437 10:31:33 -- paths/export.sh@5 -- # export PATH 00:03:07.437 10:31:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.437 10:31:33 -- nvmf/common.sh@51 -- # : 0 00:03:07.437 10:31:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:07.437 10:31:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:07.437 10:31:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:07.437 10:31:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:07.437 10:31:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:07.437 10:31:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:07.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:07.437 10:31:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:07.437 10:31:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:07.437 10:31:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:07.437 10:31:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:07.437 10:31:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:07.437 10:31:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:07.437 10:31:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:07.437 10:31:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:07.437 10:31:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:07.437 10:31:33 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:07.437 10:31:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:07.437 10:31:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:07.437 10:31:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:07.437 10:31:33 -- spdk/autotest.sh@48 -- # udevadm_pid=54217 00:03:07.437 10:31:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:07.437 10:31:33 -- pm/common@17 -- # local monitor 00:03:07.437 10:31:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.437 10:31:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:07.437 10:31:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.437 10:31:33 -- pm/common@25 -- # sleep 1 00:03:07.437 10:31:33 -- pm/common@21 -- # date +%s 00:03:07.437 10:31:33 -- pm/common@21 -- # date +%s 00:03:07.437 10:31:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731925893 00:03:07.437 10:31:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731925893 00:03:07.437 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731925893_collect-cpu-load.pm.log 00:03:07.437 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731925893_collect-vmstat.pm.log 00:03:08.371 10:31:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:08.371 10:31:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:08.371 10:31:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:08.371 10:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:08.371 10:31:34 -- spdk/autotest.sh@59 -- # create_test_list 00:03:08.371 10:31:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:08.372 10:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:08.372 10:31:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:08.372 10:31:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:08.372 10:31:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:08.372 10:31:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:08.372 10:31:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:08.372 10:31:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:08.372 10:31:34 -- common/autotest_common.sh@1457 -- # uname 00:03:08.372 10:31:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:08.372 10:31:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:08.372 10:31:34 -- common/autotest_common.sh@1477 -- # uname 00:03:08.372 10:31:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:08.372 10:31:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:08.372 10:31:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:08.630 lcov: LCOV version 1.15 00:03:08.630 10:31:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:23.566 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:23.566 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:38.548 10:32:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:38.548 10:32:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.548 10:32:02 -- common/autotest_common.sh@10 -- # set +x 00:03:38.548 10:32:02 -- spdk/autotest.sh@78 -- # rm -f 00:03:38.548 10:32:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:38.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.548 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:38.548 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:38.548 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:38.548 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:38.548 10:32:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:38.548 10:32:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:38.548 10:32:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:38.548 10:32:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:38.548 10:32:03 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:38.548 10:32:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:38.548 10:32:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:38.548 10:32:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:38.548 10:32:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:38.548 10:32:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.548 10:32:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.548 10:32:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:38.548 10:32:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:38.548 10:32:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.548 No valid GPT data, bailing 00:03:38.548 10:32:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.548 10:32:03 -- scripts/common.sh@394 -- # pt= 00:03:38.548 10:32:03 -- scripts/common.sh@395 -- # return 1 00:03:38.548 10:32:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.548 1+0 records in 00:03:38.549 1+0 records out 00:03:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0482994 s, 21.7 MB/s 00:03:38.549 10:32:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.549 10:32:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.549 10:32:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:38.549 10:32:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:38.549 10:32:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:38.549 No valid GPT data, bailing 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # pt= 00:03:38.549 10:32:03 -- scripts/common.sh@395 -- # return 1 00:03:38.549 10:32:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:38.549 1+0 records in 00:03:38.549 1+0 records out 00:03:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00568565 s, 184 MB/s 00:03:38.549 10:32:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.549 10:32:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.549 10:32:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:38.549 10:32:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:38.549 10:32:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:38.549 No valid GPT data, bailing 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # pt= 00:03:38.549 10:32:03 -- scripts/common.sh@395 -- # return 1 00:03:38.549 10:32:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:38.549 1+0 
records in 00:03:38.549 1+0 records out 00:03:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413169 s, 254 MB/s 00:03:38.549 10:32:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.549 10:32:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.549 10:32:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:38.549 10:32:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:38.549 10:32:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:38.549 No valid GPT data, bailing 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # pt= 00:03:38.549 10:32:03 -- scripts/common.sh@395 -- # return 1 00:03:38.549 10:32:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:38.549 1+0 records in 00:03:38.549 1+0 records out 00:03:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477271 s, 220 MB/s 00:03:38.549 10:32:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.549 10:32:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.549 10:32:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:38.549 10:32:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:38.549 10:32:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:38.549 No valid GPT data, bailing 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # pt= 00:03:38.549 10:32:03 -- scripts/common.sh@395 -- # return 1 00:03:38.549 10:32:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:38.549 1+0 records in 00:03:38.549 1+0 records out 00:03:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558214 s, 188 MB/s 00:03:38.549 10:32:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.549 10:32:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.549 10:32:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:38.549 10:32:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:38.549 10:32:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:38.549 No valid GPT data, bailing 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:38.549 10:32:03 -- scripts/common.sh@394 -- # pt= 00:03:38.549 10:32:03 -- scripts/common.sh@395 -- # return 1 00:03:38.549 10:32:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:38.549 1+0 records in 00:03:38.549 1+0 records out 00:03:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439622 s, 239 MB/s 00:03:38.549 10:32:03 -- spdk/autotest.sh@105 -- # sync 00:03:38.549 10:32:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:38.549 10:32:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:38.549 10:32:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.934 10:32:05 -- spdk/autotest.sh@111 -- # uname -s 00:03:39.934 10:32:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:39.934 10:32:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:39.934 10:32:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:40.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.774 
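Annotation: the pre-cleanup pass traced above makes two sweeps over the NVMe namespaces: get_zoned_devs first skips any device whose /sys/block/<dev>/queue/zoned is not 'none', then each remaining /dev/nvme*n* is probed for a partition table (spdk-gpt.py plus blkid -s PTTYPE), and devices with no valid GPT get their first MiB zeroed with dd so stale metadata cannot leak into the tests. A condensed sketch of that flow (device glob and the zoned-device check mirror the trace; error handling is simplified):

    # Sketch: wipe the first MiB of every non-zoned NVMe namespace lacking a partition table.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=${dev#/dev/}
        # Skip host-managed/host-aware zoned devices; writes there are sequential-only.
        [[ $(cat "/sys/block/$name/queue/zoned" 2>/dev/null) != none ]] && continue
        # blkid prints a PTTYPE value (gpt, dos, ...) only when a partition table exists.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done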
Hugepages 00:03:40.774 node hugesize free / total 00:03:40.774 node0 1048576kB 0 / 0 00:03:40.774 node0 2048kB 0 / 0 00:03:40.774 00:03:40.774 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.774 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:40.774 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:40.774 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:41.053 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:41.053 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:41.053 10:32:06 -- spdk/autotest.sh@117 -- # uname -s 00:03:41.053 10:32:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:41.053 10:32:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:41.053 10:32:06 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:41.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.886 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.148 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.148 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.148 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.148 10:32:07 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:43.088 10:32:08 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:43.088 10:32:08 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:43.088 10:32:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.088 10:32:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:43.088 10:32:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:43.088 10:32:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:43.088 10:32:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.088 10:32:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:43.088 10:32:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:43.088 10:32:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:43.088 10:32:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:43.088 10:32:08 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.654 Waiting for block devices as requested 00:03:43.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:43.654 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:43.912 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:43.912 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.207 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:49.207 10:32:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.207 10:32:14 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:49.207 10:32:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.207 10:32:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1543 -- # continue 00:03:49.207 10:32:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.207 10:32:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1543 -- # continue 00:03:49.207 10:32:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.207 10:32:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1543 -- # continue 00:03:49.207 10:32:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:49.207 10:32:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.207 10:32:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.207 10:32:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.207 10:32:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.207 10:32:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
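Annotation: the block repeated four times above resolves each PCI BDF to its character device by matching the BDF against readlink -f /sys/class/nvme/*, then parses 'nvme id-ctrl' output twice: the OACS field is masked against bit 3 (0x8) to confirm namespace-management support, and unvmcap is checked to be 0 before the controller is accepted. A compact sketch of one iteration (the helper name is mine, not the harness's):

    # Sketch: map a PCI address to /dev/nvmeX and test the namespace-management bit.
    bdf_to_ctrlr() {
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        [[ -n $path ]] && echo "/dev/$(basename "$path")"
    }
    ctrlr=$(bdf_to_ctrlr 0000:00:10.0)          # resolves to /dev/nvme1 in this run
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then
        echo "$ctrlr supports namespace management"
    fi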
00:03:49.207 10:32:14 -- common/autotest_common.sh@1543 -- # continue 00:03:49.207 10:32:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:49.207 10:32:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.207 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.207 10:32:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:49.207 10:32:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.207 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.207 10:32:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.033 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.033 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.033 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.033 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.033 10:32:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:50.033 10:32:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.033 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:50.033 10:32:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:50.033 10:32:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:50.033 10:32:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.033 10:32:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:50.033 10:32:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:50.033 10:32:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:50.033 10:32:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:50.033 10:32:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:50.033 10:32:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:50.033 10:32:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:50.033 10:32:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.033 10:32:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:50.033 10:32:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:50.292 10:32:15 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:50.292 10:32:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:50.292 10:32:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.292 10:32:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.292 10:32:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.292 10:32:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.292 10:32:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.292 10:32:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
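Annotation: opal_revert_cleanup, whose scan is traced around this point, only acts on drives that need an Opal revert; it reads each controller's PCI device ID from sysfs and compares it against 0x0a54 (the ID this harness associates with revert-needing hardware). The QEMU controllers in this run all report 0x0010, so every comparison fails and the candidate list stays empty. A sketch of the scan (BDF list and target ID taken from the trace):

    # Sketch: collect BDFs whose PCI device ID matches the one slated for Opal revert.
    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on every device here
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} > 0 )) || echo "no Opal-revert candidates found"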
00:03:50.292 10:32:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:50.292 10:32:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.292 10:32:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.292 10:32:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:50.292 10:32:15 -- common/autotest_common.sh@1572 -- # return 0 00:03:50.292 10:32:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:50.292 10:32:15 -- common/autotest_common.sh@1580 -- # return 0 00:03:50.292 10:32:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.292 10:32:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.292 10:32:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.292 10:32:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.292 10:32:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.292 10:32:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.292 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:50.292 10:32:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.292 10:32:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.292 10:32:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.292 10:32:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.292 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:50.292 ************************************ 00:03:50.292 START TEST env 00:03:50.292 ************************************ 00:03:50.292 10:32:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.292 * Looking for test storage... 00:03:50.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:50.292 10:32:16 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.292 10:32:16 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.292 10:32:16 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.292 10:32:16 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.292 10:32:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.292 10:32:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.292 10:32:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.292 10:32:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.292 10:32:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.292 10:32:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.292 10:32:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.292 10:32:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.292 10:32:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.292 10:32:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.292 10:32:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.292 10:32:16 env -- scripts/common.sh@344 -- # case "$op" in 00:03:50.292 10:32:16 env -- scripts/common.sh@345 -- # : 1 00:03:50.292 10:32:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.292 10:32:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.293 10:32:16 env -- scripts/common.sh@365 -- # decimal 1 00:03:50.293 10:32:16 env -- scripts/common.sh@353 -- # local d=1 00:03:50.293 10:32:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.293 10:32:16 env -- scripts/common.sh@355 -- # echo 1 00:03:50.293 10:32:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.293 10:32:16 env -- scripts/common.sh@366 -- # decimal 2 00:03:50.293 10:32:16 env -- scripts/common.sh@353 -- # local d=2 00:03:50.293 10:32:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.293 10:32:16 env -- scripts/common.sh@355 -- # echo 2 00:03:50.293 10:32:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.293 10:32:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.293 10:32:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.293 10:32:16 env -- scripts/common.sh@368 -- # return 0 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.293 --rc genhtml_branch_coverage=1 00:03:50.293 --rc genhtml_function_coverage=1 00:03:50.293 --rc genhtml_legend=1 00:03:50.293 --rc geninfo_all_blocks=1 00:03:50.293 --rc geninfo_unexecuted_blocks=1 00:03:50.293 00:03:50.293 ' 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.293 --rc genhtml_branch_coverage=1 00:03:50.293 --rc genhtml_function_coverage=1 00:03:50.293 --rc genhtml_legend=1 00:03:50.293 --rc geninfo_all_blocks=1 00:03:50.293 --rc geninfo_unexecuted_blocks=1 00:03:50.293 00:03:50.293 ' 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.293 --rc genhtml_branch_coverage=1 00:03:50.293 --rc genhtml_function_coverage=1 00:03:50.293 --rc genhtml_legend=1 00:03:50.293 --rc geninfo_all_blocks=1 00:03:50.293 --rc geninfo_unexecuted_blocks=1 00:03:50.293 00:03:50.293 ' 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.293 --rc genhtml_branch_coverage=1 00:03:50.293 --rc genhtml_function_coverage=1 00:03:50.293 --rc genhtml_legend=1 00:03:50.293 --rc geninfo_all_blocks=1 00:03:50.293 --rc geninfo_unexecuted_blocks=1 00:03:50.293 00:03:50.293 ' 00:03:50.293 10:32:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.293 10:32:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.293 10:32:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.293 ************************************ 00:03:50.293 START TEST env_memory 00:03:50.293 ************************************ 00:03:50.293 10:32:16 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.293 00:03:50.293 00:03:50.293 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.293 http://cunit.sourceforge.net/ 00:03:50.293 00:03:50.293 00:03:50.293 Suite: memory 00:03:50.552 Test: alloc and free memory map ...[2024-11-18 10:32:16.177318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.552 passed 00:03:50.552 Test: mem map translation ...[2024-11-18 10:32:16.215994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.552 [2024-11-18 10:32:16.216042] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.552 [2024-11-18 10:32:16.216101] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.552 [2024-11-18 10:32:16.216116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.552 passed 00:03:50.552 Test: mem map registration ...[2024-11-18 10:32:16.284125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:50.552 [2024-11-18 10:32:16.284172] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:50.552 passed 00:03:50.552 Test: mem map adjacent registrations ...passed 00:03:50.552 00:03:50.552 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.552 suites 1 1 n/a 0 0 00:03:50.552 tests 4 4 4 0 0 00:03:50.552 asserts 152 152 152 0 n/a 00:03:50.552 00:03:50.552 Elapsed time = 0.233 seconds 00:03:50.552 00:03:50.552 real 0m0.262s 00:03:50.552 user 0m0.238s 00:03:50.552 sys 0m0.016s 00:03:50.552 10:32:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.552 10:32:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.552 ************************************ 00:03:50.552 END TEST env_memory 00:03:50.552 ************************************ 00:03:50.552 10:32:16 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.552 10:32:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.552 10:32:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.552 10:32:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.552 ************************************ 00:03:50.552 START TEST env_vtophys 00:03:50.552 ************************************ 00:03:50.552 10:32:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.811 EAL: lib.eal log level changed from notice to debug 00:03:50.811 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 1 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 2 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 3 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 4 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 5 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 6 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 7 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 8 as core 0 on socket 0 00:03:50.811 EAL: Detected lcore 9 as core 0 on socket 0 00:03:50.811 EAL: Maximum logical cores by configuration: 128 00:03:50.811 EAL: Detected CPU lcores: 10 00:03:50.811 EAL: Detected NUMA nodes: 1 00:03:50.811 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.811 EAL: Detected shared linkage of DPDK 00:03:50.811 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:50.811 EAL: Selected IOVA mode 'PA' 00:03:50.811 EAL: Probing VFIO support... 00:03:50.811 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:50.811 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:50.811 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.811 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:50.811 EAL: Setting up physically contiguous memory... 00:03:50.811 EAL: Setting maximum number of open files to 524288 00:03:50.811 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:50.811 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:50.811 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.811 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:50.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.811 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.811 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:50.811 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:50.811 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.811 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:50.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.811 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.811 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:50.811 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:50.811 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.811 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:50.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.811 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.811 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:50.811 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:50.811 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.811 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:50.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.811 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.811 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:50.811 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:50.811 EAL: Hugepages will be freed exactly as allocated. 00:03:50.811 EAL: No shared files mode enabled, IPC is disabled 00:03:50.811 EAL: No shared files mode enabled, IPC is disabled 00:03:50.811 EAL: TSC frequency is ~2600000 KHz 00:03:50.811 EAL: Main lcore 0 is ready (tid=7f63ad2d5a40;cpuset=[0]) 00:03:50.811 EAL: Trying to obtain current memory policy. 00:03:50.811 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.811 EAL: Restoring previous memory policy: 0 00:03:50.811 EAL: request: mp_malloc_sync 00:03:50.811 EAL: No shared files mode enabled, IPC is disabled 00:03:50.811 EAL: Heap on socket 0 was expanded by 2MB 00:03:50.811 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:50.811 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:50.811 EAL: Mem event callback 'spdk:(nil)' registered 00:03:50.811 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:50.811 00:03:50.811 00:03:50.811 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.811 http://cunit.sourceforge.net/ 00:03:50.811 00:03:50.811 00:03:50.811 Suite: components_suite 00:03:51.070 Test: vtophys_malloc_test ...passed 00:03:51.070 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.070 EAL: Restoring previous memory policy: 4 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.070 EAL: Trying to obtain current memory policy. 00:03:51.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.070 EAL: Restoring previous memory policy: 4 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.070 EAL: Trying to obtain current memory policy. 00:03:51.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.070 EAL: Restoring previous memory policy: 4 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.070 EAL: Trying to obtain current memory policy. 00:03:51.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.070 EAL: Restoring previous memory policy: 4 00:03:51.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.070 EAL: request: mp_malloc_sync 00:03:51.070 EAL: No shared files mode enabled, IPC is disabled 00:03:51.070 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.328 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.328 EAL: request: mp_malloc_sync 00:03:51.328 EAL: No shared files mode enabled, IPC is disabled 00:03:51.328 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.328 EAL: Trying to obtain current memory policy. 00:03:51.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.328 EAL: Restoring previous memory policy: 4 00:03:51.328 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.328 EAL: request: mp_malloc_sync 00:03:51.328 EAL: No shared files mode enabled, IPC is disabled 00:03:51.328 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.328 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.328 EAL: request: mp_malloc_sync 00:03:51.328 EAL: No shared files mode enabled, IPC is disabled 00:03:51.328 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.328 EAL: Trying to obtain current memory policy. 
00:03:51.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.328 EAL: Restoring previous memory policy: 4 00:03:51.328 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.328 EAL: request: mp_malloc_sync 00:03:51.328 EAL: No shared files mode enabled, IPC is disabled 00:03:51.328 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.328 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.328 EAL: request: mp_malloc_sync 00:03:51.328 EAL: No shared files mode enabled, IPC is disabled 00:03:51.328 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.328 EAL: Trying to obtain current memory policy. 00:03:51.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.587 EAL: Restoring previous memory policy: 4 00:03:51.587 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.587 EAL: request: mp_malloc_sync 00:03:51.587 EAL: No shared files mode enabled, IPC is disabled 00:03:51.587 EAL: Heap on socket 0 was expanded by 130MB 00:03:51.587 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.587 EAL: request: mp_malloc_sync 00:03:51.587 EAL: No shared files mode enabled, IPC is disabled 00:03:51.587 EAL: Heap on socket 0 was shrunk by 130MB 00:03:51.845 EAL: Trying to obtain current memory policy. 00:03:51.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.845 EAL: Restoring previous memory policy: 4 00:03:51.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.845 EAL: request: mp_malloc_sync 00:03:51.845 EAL: No shared files mode enabled, IPC is disabled 00:03:51.845 EAL: Heap on socket 0 was expanded by 258MB 00:03:52.103 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.103 EAL: request: mp_malloc_sync 00:03:52.103 EAL: No shared files mode enabled, IPC is disabled 00:03:52.103 EAL: Heap on socket 0 was shrunk by 258MB 00:03:52.361 EAL: Trying to obtain current memory policy. 00:03:52.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.361 EAL: Restoring previous memory policy: 4 00:03:52.361 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.361 EAL: request: mp_malloc_sync 00:03:52.361 EAL: No shared files mode enabled, IPC is disabled 00:03:52.361 EAL: Heap on socket 0 was expanded by 514MB 00:03:52.928 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.187 EAL: request: mp_malloc_sync 00:03:53.187 EAL: No shared files mode enabled, IPC is disabled 00:03:53.187 EAL: Heap on socket 0 was shrunk by 514MB 00:03:53.787 EAL: Trying to obtain current memory policy. 
00:03:53.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.787 EAL: Restoring previous memory policy: 4 00:03:53.787 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.787 EAL: request: mp_malloc_sync 00:03:53.787 EAL: No shared files mode enabled, IPC is disabled 00:03:53.787 EAL: Heap on socket 0 was expanded by 1026MB 00:03:55.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.171 EAL: request: mp_malloc_sync 00:03:55.171 EAL: No shared files mode enabled, IPC is disabled 00:03:55.171 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.736 passed 00:03:55.736 00:03:55.736 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.736 suites 1 1 n/a 0 0 00:03:55.736 tests 2 2 2 0 0 00:03:55.736 asserts 5824 5824 5824 0 n/a 00:03:55.736 00:03:55.737 Elapsed time = 4.837 seconds 00:03:55.737 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.737 EAL: request: mp_malloc_sync 00:03:55.737 EAL: No shared files mode enabled, IPC is disabled 00:03:55.737 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.737 EAL: No shared files mode enabled, IPC is disabled 00:03:55.737 EAL: No shared files mode enabled, IPC is disabled 00:03:55.737 EAL: No shared files mode enabled, IPC is disabled 00:03:55.737 ************************************ 00:03:55.737 END TEST env_vtophys 00:03:55.737 ************************************ 00:03:55.737 00:03:55.737 real 0m5.101s 00:03:55.737 user 0m4.319s 00:03:55.737 sys 0m0.628s 00:03:55.737 10:32:21 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.737 10:32:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.737 10:32:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.737 10:32:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.737 10:32:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.737 10:32:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.737 ************************************ 00:03:55.737 START TEST env_pci 00:03:55.737 ************************************ 00:03:55.737 10:32:21 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.737 00:03:55.737 00:03:55.737 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.737 http://cunit.sourceforge.net/ 00:03:55.737 00:03:55.737 00:03:55.737 Suite: pci 00:03:55.737 Test: pci_hook ...[2024-11-18 10:32:21.578969] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56965 has claimed it 00:03:55.737 passed 00:03:55.737 00:03:55.737 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.737 suites 1 1 n/a 0 0 00:03:55.737 tests 1 1 1 0 0 00:03:55.737 asserts 25 25 25 0 n/a 00:03:55.737 00:03:55.737 Elapsed time = 0.004 seconds 00:03:55.737 EAL: Cannot find device (10000:00:01.0) 00:03:55.737 EAL: Failed to attach device on primary process 00:03:55.737 00:03:55.737 real 0m0.053s 00:03:55.737 user 0m0.026s 00:03:55.737 sys 0m0.027s 00:03:55.737 10:32:21 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.737 10:32:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.737 ************************************ 00:03:55.737 END TEST env_pci 00:03:55.737 ************************************ 00:03:55.995 10:32:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.995 10:32:21 env -- env/env.sh@15 -- # uname 00:03:55.995 10:32:21 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.995 10:32:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.995 10:32:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.995 10:32:21 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:55.995 10:32:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.995 10:32:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 ************************************ 00:03:55.995 START TEST env_dpdk_post_init 00:03:55.995 ************************************ 00:03:55.996 10:32:21 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.996 EAL: Detected CPU lcores: 10 00:03:55.996 EAL: Detected NUMA nodes: 1 00:03:55.996 EAL: Detected shared linkage of DPDK 00:03:55.996 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.996 EAL: Selected IOVA mode 'PA' 00:03:55.996 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.996 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:55.996 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:55.996 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:55.996 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:55.996 Starting DPDK initialization... 00:03:55.996 Starting SPDK post initialization... 00:03:55.996 SPDK NVMe probe 00:03:55.996 Attaching to 0000:00:10.0 00:03:55.996 Attaching to 0000:00:11.0 00:03:55.996 Attaching to 0000:00:12.0 00:03:55.996 Attaching to 0000:00:13.0 00:03:55.996 Attached to 0000:00:10.0 00:03:55.996 Attached to 0000:00:11.0 00:03:55.996 Attached to 0000:00:13.0 00:03:55.996 Attached to 0000:00:12.0 00:03:55.996 Cleaning up... 
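The four controllers probed above are QEMU's emulated NVMe devices (vendor:device 1b36:0010), each handed to SPDK's userspace nvme driver; the attach callbacks complete asynchronously, which is why 0000:00:13.0 reports attached before 0000:00:12.0 even though the probes were issued in BDF order. One way to reproduce this stage by hand, as a sketch only, assuming an SPDK checkout at $SPDK_DIR and the same emulated PCI topology (both are assumptions, not part of this run):

    # List the emulated NVMe functions and their current driver binding
    lspci -nn -d 1b36:0010
    sudo "$SPDK_DIR/scripts/setup.sh" status
    # Re-run the post-init test binary with the flags used in this log
    sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000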
00:03:56.254 00:03:56.254 real 0m0.228s 00:03:56.254 user 0m0.068s 00:03:56.254 sys 0m0.060s 00:03:56.254 ************************************ 00:03:56.254 END TEST env_dpdk_post_init 00:03:56.254 ************************************ 00:03:56.254 10:32:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.254 10:32:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.254 10:32:21 env -- env/env.sh@26 -- # uname 00:03:56.254 10:32:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:56.254 10:32:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.254 10:32:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.254 10:32:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.254 10:32:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.254 ************************************ 00:03:56.254 START TEST env_mem_callbacks 00:03:56.254 ************************************ 00:03:56.254 10:32:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.254 EAL: Detected CPU lcores: 10 00:03:56.254 EAL: Detected NUMA nodes: 1 00:03:56.254 EAL: Detected shared linkage of DPDK 00:03:56.254 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.254 EAL: Selected IOVA mode 'PA' 00:03:56.254 00:03:56.254 00:03:56.254 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.254 http://cunit.sourceforge.net/ 00:03:56.254 00:03:56.254 00:03:56.254 Suite: memory 00:03:56.254 Test: test ... 00:03:56.254 register 0x200000200000 2097152 00:03:56.254 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.254 malloc 3145728 00:03:56.254 register 0x200000400000 4194304 00:03:56.254 buf 0x2000004fffc0 len 3145728 PASSED 00:03:56.254 malloc 64 00:03:56.254 buf 0x2000004ffec0 len 64 PASSED 00:03:56.254 malloc 4194304 00:03:56.254 register 0x200000800000 6291456 00:03:56.254 buf 0x2000009fffc0 len 4194304 PASSED 00:03:56.254 free 0x2000004fffc0 3145728 00:03:56.254 free 0x2000004ffec0 64 00:03:56.254 unregister 0x200000400000 4194304 PASSED 00:03:56.254 free 0x2000009fffc0 4194304 00:03:56.254 unregister 0x200000800000 6291456 PASSED 00:03:56.254 malloc 8388608 00:03:56.254 register 0x200000400000 10485760 00:03:56.254 buf 0x2000005fffc0 len 8388608 PASSED 00:03:56.254 free 0x2000005fffc0 8388608 00:03:56.254 unregister 0x200000400000 10485760 PASSED 00:03:56.254 passed 00:03:56.254 00:03:56.254 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.254 suites 1 1 n/a 0 0 00:03:56.254 tests 1 1 1 0 0 00:03:56.254 asserts 15 15 15 0 n/a 00:03:56.254 00:03:56.254 Elapsed time = 0.039 seconds 00:03:56.254 00:03:56.254 real 0m0.207s 00:03:56.254 user 0m0.059s 00:03:56.254 sys 0m0.045s 00:03:56.254 10:32:22 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.254 10:32:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:56.254 ************************************ 00:03:56.255 END TEST env_mem_callbacks 00:03:56.255 ************************************ 00:03:56.512 00:03:56.512 real 0m6.180s 00:03:56.512 user 0m4.855s 00:03:56.512 sys 0m0.966s 00:03:56.512 10:32:22 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.512 10:32:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.512 ************************************ 00:03:56.512 END TEST env 00:03:56.512 
************************************ 00:03:56.512 10:32:22 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:56.512 10:32:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.512 10:32:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.512 10:32:22 -- common/autotest_common.sh@10 -- # set +x 00:03:56.512 ************************************ 00:03:56.512 START TEST rpc 00:03:56.512 ************************************ 00:03:56.512 10:32:22 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:56.512 * Looking for test storage... 00:03:56.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.512 10:32:22 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.513 10:32:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.513 10:32:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.513 10:32:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.513 10:32:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.513 10:32:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.513 10:32:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.513 10:32:22 rpc -- scripts/common.sh@345 -- # : 1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.513 10:32:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.513 10:32:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.513 10:32:22 rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.513 10:32:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.513 10:32:22 rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.513 10:32:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.513 10:32:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.513 10:32:22 rpc -- scripts/common.sh@368 -- # return 0 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.513 --rc genhtml_branch_coverage=1 00:03:56.513 --rc genhtml_function_coverage=1 00:03:56.513 --rc genhtml_legend=1 00:03:56.513 --rc geninfo_all_blocks=1 00:03:56.513 --rc geninfo_unexecuted_blocks=1 00:03:56.513 00:03:56.513 ' 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.513 --rc genhtml_branch_coverage=1 00:03:56.513 --rc genhtml_function_coverage=1 00:03:56.513 --rc genhtml_legend=1 00:03:56.513 --rc geninfo_all_blocks=1 00:03:56.513 --rc geninfo_unexecuted_blocks=1 00:03:56.513 00:03:56.513 ' 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.513 --rc genhtml_branch_coverage=1 00:03:56.513 --rc genhtml_function_coverage=1 00:03:56.513 --rc genhtml_legend=1 00:03:56.513 --rc geninfo_all_blocks=1 00:03:56.513 --rc geninfo_unexecuted_blocks=1 00:03:56.513 00:03:56.513 ' 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.513 --rc genhtml_branch_coverage=1 00:03:56.513 --rc genhtml_function_coverage=1 00:03:56.513 --rc genhtml_legend=1 00:03:56.513 --rc geninfo_all_blocks=1 00:03:56.513 --rc geninfo_unexecuted_blocks=1 00:03:56.513 00:03:56.513 ' 00:03:56.513 10:32:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:56.513 10:32:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57087 00:03:56.513 10:32:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.513 10:32:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57087 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@835 -- # '[' -z 57087 ']' 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
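At this point the harness has forked spdk_tgt (pid 57087) and is blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A simplified stand-in for that polling loop, assuming $SPDK_DIR and $spdk_pid are set (the real helper in autotest_common.sh handles retries and cleanup more carefully):

    # Poll the freshly started target until its RPC server responds
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    for _ in $(seq 1 100); do
        $rpc rpc_get_methods &> /dev/null && break   # target is listening
        kill -0 "$spdk_pid" 2> /dev/null || exit 1   # bail out if it died
        sleep 0.5
    done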
00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.513 10:32:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.513 [2024-11-18 10:32:22.394335] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:03:56.513 [2024-11-18 10:32:22.394646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57087 ] 00:03:56.771 [2024-11-18 10:32:22.553467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.771 [2024-11-18 10:32:22.652265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.771 [2024-11-18 10:32:22.652434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57087' to capture a snapshot of events at runtime. 00:03:56.771 [2024-11-18 10:32:22.652499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:56.771 [2024-11-18 10:32:22.652532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:56.771 [2024-11-18 10:32:22.652551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57087 for offline analysis/debug. 00:03:56.771 [2024-11-18 10:32:22.653420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.706 10:32:23 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.706 10:32:23 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:57.706 10:32:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.706 10:32:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.706 10:32:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:57.706 10:32:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:57.706 10:32:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.706 10:32:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.706 10:32:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.706 ************************************ 00:03:57.706 START TEST rpc_integrity 00:03:57.706 ************************************ 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.706 10:32:23 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.706 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.706 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.706 { 00:03:57.706 "name": "Malloc0", 00:03:57.706 "aliases": [ 00:03:57.706 "d6a8aeac-1fe0-4fb2-b346-457a4634946a" 00:03:57.706 ], 00:03:57.706 "product_name": "Malloc disk", 00:03:57.706 "block_size": 512, 00:03:57.706 "num_blocks": 16384, 00:03:57.706 "uuid": "d6a8aeac-1fe0-4fb2-b346-457a4634946a", 00:03:57.706 "assigned_rate_limits": { 00:03:57.706 "rw_ios_per_sec": 0, 00:03:57.706 "rw_mbytes_per_sec": 0, 00:03:57.706 "r_mbytes_per_sec": 0, 00:03:57.706 "w_mbytes_per_sec": 0 00:03:57.706 }, 00:03:57.706 "claimed": false, 00:03:57.706 "zoned": false, 00:03:57.706 "supported_io_types": { 00:03:57.706 "read": true, 00:03:57.706 "write": true, 00:03:57.706 "unmap": true, 00:03:57.706 "flush": true, 00:03:57.706 "reset": true, 00:03:57.706 "nvme_admin": false, 00:03:57.706 "nvme_io": false, 00:03:57.706 "nvme_io_md": false, 00:03:57.706 "write_zeroes": true, 00:03:57.706 "zcopy": true, 00:03:57.706 "get_zone_info": false, 00:03:57.706 "zone_management": false, 00:03:57.706 "zone_append": false, 00:03:57.706 "compare": false, 00:03:57.706 "compare_and_write": false, 00:03:57.706 "abort": true, 00:03:57.707 "seek_hole": false, 00:03:57.707 "seek_data": false, 00:03:57.707 "copy": true, 00:03:57.707 "nvme_iov_md": false 00:03:57.707 }, 00:03:57.707 "memory_domains": [ 00:03:57.707 { 00:03:57.707 "dma_device_id": "system", 00:03:57.707 "dma_device_type": 1 00:03:57.707 }, 00:03:57.707 { 00:03:57.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.707 "dma_device_type": 2 00:03:57.707 } 00:03:57.707 ], 00:03:57.707 "driver_specific": {} 00:03:57.707 } 00:03:57.707 ]' 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 [2024-11-18 10:32:23.355290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.707 [2024-11-18 10:32:23.355349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.707 [2024-11-18 10:32:23.355379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:57.707 [2024-11-18 10:32:23.355392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.707 [2024-11-18 10:32:23.357591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.707 [2024-11-18 10:32:23.357631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.707 Passthru0 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 
10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.707 { 00:03:57.707 "name": "Malloc0", 00:03:57.707 "aliases": [ 00:03:57.707 "d6a8aeac-1fe0-4fb2-b346-457a4634946a" 00:03:57.707 ], 00:03:57.707 "product_name": "Malloc disk", 00:03:57.707 "block_size": 512, 00:03:57.707 "num_blocks": 16384, 00:03:57.707 "uuid": "d6a8aeac-1fe0-4fb2-b346-457a4634946a", 00:03:57.707 "assigned_rate_limits": { 00:03:57.707 "rw_ios_per_sec": 0, 00:03:57.707 "rw_mbytes_per_sec": 0, 00:03:57.707 "r_mbytes_per_sec": 0, 00:03:57.707 "w_mbytes_per_sec": 0 00:03:57.707 }, 00:03:57.707 "claimed": true, 00:03:57.707 "claim_type": "exclusive_write", 00:03:57.707 "zoned": false, 00:03:57.707 "supported_io_types": { 00:03:57.707 "read": true, 00:03:57.707 "write": true, 00:03:57.707 "unmap": true, 00:03:57.707 "flush": true, 00:03:57.707 "reset": true, 00:03:57.707 "nvme_admin": false, 00:03:57.707 "nvme_io": false, 00:03:57.707 "nvme_io_md": false, 00:03:57.707 "write_zeroes": true, 00:03:57.707 "zcopy": true, 00:03:57.707 "get_zone_info": false, 00:03:57.707 "zone_management": false, 00:03:57.707 "zone_append": false, 00:03:57.707 "compare": false, 00:03:57.707 "compare_and_write": false, 00:03:57.707 "abort": true, 00:03:57.707 "seek_hole": false, 00:03:57.707 "seek_data": false, 00:03:57.707 "copy": true, 00:03:57.707 "nvme_iov_md": false 00:03:57.707 }, 00:03:57.707 "memory_domains": [ 00:03:57.707 { 00:03:57.707 "dma_device_id": "system", 00:03:57.707 "dma_device_type": 1 00:03:57.707 }, 00:03:57.707 { 00:03:57.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.707 "dma_device_type": 2 00:03:57.707 } 00:03:57.707 ], 00:03:57.707 "driver_specific": {} 00:03:57.707 }, 00:03:57.707 { 00:03:57.707 "name": "Passthru0", 00:03:57.707 "aliases": [ 00:03:57.707 "bdb0d6b2-5006-53e7-a5d0-11536a8f874e" 00:03:57.707 ], 00:03:57.707 "product_name": "passthru", 00:03:57.707 "block_size": 512, 00:03:57.707 "num_blocks": 16384, 00:03:57.707 "uuid": "bdb0d6b2-5006-53e7-a5d0-11536a8f874e", 00:03:57.707 "assigned_rate_limits": { 00:03:57.707 "rw_ios_per_sec": 0, 00:03:57.707 "rw_mbytes_per_sec": 0, 00:03:57.707 "r_mbytes_per_sec": 0, 00:03:57.707 "w_mbytes_per_sec": 0 00:03:57.707 }, 00:03:57.707 "claimed": false, 00:03:57.707 "zoned": false, 00:03:57.707 "supported_io_types": { 00:03:57.707 "read": true, 00:03:57.707 "write": true, 00:03:57.707 "unmap": true, 00:03:57.707 "flush": true, 00:03:57.707 "reset": true, 00:03:57.707 "nvme_admin": false, 00:03:57.707 "nvme_io": false, 00:03:57.707 "nvme_io_md": false, 00:03:57.707 "write_zeroes": true, 00:03:57.707 "zcopy": true, 00:03:57.707 "get_zone_info": false, 00:03:57.707 "zone_management": false, 00:03:57.707 "zone_append": false, 00:03:57.707 "compare": false, 00:03:57.707 "compare_and_write": false, 00:03:57.707 "abort": true, 00:03:57.707 "seek_hole": false, 00:03:57.707 "seek_data": false, 00:03:57.707 "copy": true, 00:03:57.707 "nvme_iov_md": false 00:03:57.707 }, 00:03:57.707 "memory_domains": [ 00:03:57.707 { 00:03:57.707 "dma_device_id": "system", 00:03:57.707 "dma_device_type": 1 00:03:57.707 }, 00:03:57.707 { 00:03:57.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.707 "dma_device_type": 2 
00:03:57.707 } 00:03:57.707 ], 00:03:57.707 "driver_specific": { 00:03:57.707 "passthru": { 00:03:57.707 "name": "Passthru0", 00:03:57.707 "base_bdev_name": "Malloc0" 00:03:57.707 } 00:03:57.707 } 00:03:57.707 } 00:03:57.707 ]' 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.707 ************************************ 00:03:57.707 END TEST rpc_integrity 00:03:57.707 ************************************ 00:03:57.707 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.707 00:03:57.707 real 0m0.236s 00:03:57.707 user 0m0.132s 00:03:57.707 sys 0m0.026s 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.707 10:32:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.707 10:32:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.707 10:32:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 ************************************ 00:03:57.707 START TEST rpc_plugins 00:03:57.707 ************************************ 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:57.707 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.707 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.707 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.707 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.707 { 00:03:57.707 "name": "Malloc1", 00:03:57.707 "aliases": 
[ 00:03:57.707 "d0c707c7-fc82-424b-9b33-8c7659a4c32b" 00:03:57.707 ], 00:03:57.707 "product_name": "Malloc disk", 00:03:57.707 "block_size": 4096, 00:03:57.707 "num_blocks": 256, 00:03:57.707 "uuid": "d0c707c7-fc82-424b-9b33-8c7659a4c32b", 00:03:57.707 "assigned_rate_limits": { 00:03:57.707 "rw_ios_per_sec": 0, 00:03:57.707 "rw_mbytes_per_sec": 0, 00:03:57.707 "r_mbytes_per_sec": 0, 00:03:57.707 "w_mbytes_per_sec": 0 00:03:57.707 }, 00:03:57.707 "claimed": false, 00:03:57.707 "zoned": false, 00:03:57.707 "supported_io_types": { 00:03:57.707 "read": true, 00:03:57.707 "write": true, 00:03:57.707 "unmap": true, 00:03:57.707 "flush": true, 00:03:57.707 "reset": true, 00:03:57.707 "nvme_admin": false, 00:03:57.707 "nvme_io": false, 00:03:57.707 "nvme_io_md": false, 00:03:57.707 "write_zeroes": true, 00:03:57.707 "zcopy": true, 00:03:57.707 "get_zone_info": false, 00:03:57.707 "zone_management": false, 00:03:57.707 "zone_append": false, 00:03:57.707 "compare": false, 00:03:57.707 "compare_and_write": false, 00:03:57.707 "abort": true, 00:03:57.707 "seek_hole": false, 00:03:57.707 "seek_data": false, 00:03:57.707 "copy": true, 00:03:57.708 "nvme_iov_md": false 00:03:57.708 }, 00:03:57.708 "memory_domains": [ 00:03:57.708 { 00:03:57.708 "dma_device_id": "system", 00:03:57.708 "dma_device_type": 1 00:03:57.708 }, 00:03:57.708 { 00:03:57.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.708 "dma_device_type": 2 00:03:57.708 } 00:03:57.708 ], 00:03:57.708 "driver_specific": {} 00:03:57.708 } 00:03:57.708 ]' 00:03:57.708 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.966 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.966 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.966 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.966 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.966 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:57.966 ************************************ 00:03:57.966 END TEST rpc_plugins 00:03:57.966 ************************************ 00:03:57.966 10:32:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.966 00:03:57.966 real 0m0.116s 00:03:57.966 user 0m0.063s 00:03:57.966 sys 0m0.014s 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.966 10:32:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.966 10:32:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.966 10:32:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.966 10:32:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.966 10:32:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.966 ************************************ 00:03:57.966 START TEST rpc_trace_cmd_test 00:03:57.966 ************************************ 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:57.966 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57087", 00:03:57.966 "tpoint_group_mask": "0x8", 00:03:57.966 "iscsi_conn": { 00:03:57.966 "mask": "0x2", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "scsi": { 00:03:57.966 "mask": "0x4", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "bdev": { 00:03:57.966 "mask": "0x8", 00:03:57.966 "tpoint_mask": "0xffffffffffffffff" 00:03:57.966 }, 00:03:57.966 "nvmf_rdma": { 00:03:57.966 "mask": "0x10", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "nvmf_tcp": { 00:03:57.966 "mask": "0x20", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "ftl": { 00:03:57.966 "mask": "0x40", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "blobfs": { 00:03:57.966 "mask": "0x80", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "dsa": { 00:03:57.966 "mask": "0x200", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "thread": { 00:03:57.966 "mask": "0x400", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "nvme_pcie": { 00:03:57.966 "mask": "0x800", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "iaa": { 00:03:57.966 "mask": "0x1000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "nvme_tcp": { 00:03:57.966 "mask": "0x2000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "bdev_nvme": { 00:03:57.966 "mask": "0x4000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "sock": { 00:03:57.966 "mask": "0x8000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "blob": { 00:03:57.966 "mask": "0x10000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "bdev_raid": { 00:03:57.966 "mask": "0x20000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 }, 00:03:57.966 "scheduler": { 00:03:57.966 "mask": "0x40000", 00:03:57.966 "tpoint_mask": "0x0" 00:03:57.966 } 00:03:57.966 }' 00:03:57.966 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:57.967 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:57.967 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:57.967 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:57.967 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:57.967 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:57.967 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:58.225 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:58.225 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:58.225 ************************************ 00:03:58.225 END TEST rpc_trace_cmd_test 00:03:58.225 ************************************ 00:03:58.225 10:32:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:58.225 00:03:58.225 real 0m0.169s 
00:03:58.225 user 0m0.134s 00:03:58.225 sys 0m0.024s 00:03:58.225 10:32:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.225 10:32:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.225 10:32:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:58.225 10:32:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:58.225 10:32:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:58.225 10:32:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.225 10:32:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.225 10:32:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.225 ************************************ 00:03:58.225 START TEST rpc_daemon_integrity 00:03:58.225 ************************************ 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.225 10:32:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.226 10:32:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.226 10:32:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.226 10:32:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.226 { 00:03:58.226 "name": "Malloc2", 00:03:58.226 "aliases": [ 00:03:58.226 "f183a08f-fd60-4780-8479-7dfbbb5aed75" 00:03:58.226 ], 00:03:58.226 "product_name": "Malloc disk", 00:03:58.226 "block_size": 512, 00:03:58.226 "num_blocks": 16384, 00:03:58.226 "uuid": "f183a08f-fd60-4780-8479-7dfbbb5aed75", 00:03:58.226 "assigned_rate_limits": { 00:03:58.226 "rw_ios_per_sec": 0, 00:03:58.226 "rw_mbytes_per_sec": 0, 00:03:58.226 "r_mbytes_per_sec": 0, 00:03:58.226 "w_mbytes_per_sec": 0 00:03:58.226 }, 00:03:58.226 "claimed": false, 00:03:58.226 "zoned": false, 00:03:58.226 "supported_io_types": { 00:03:58.226 "read": true, 00:03:58.226 "write": true, 00:03:58.226 "unmap": true, 00:03:58.226 "flush": true, 00:03:58.226 "reset": true, 00:03:58.226 "nvme_admin": false, 00:03:58.226 "nvme_io": false, 00:03:58.226 "nvme_io_md": false, 00:03:58.226 "write_zeroes": true, 00:03:58.226 "zcopy": true, 00:03:58.226 "get_zone_info": false, 00:03:58.226 "zone_management": false, 00:03:58.226 "zone_append": false, 00:03:58.226 "compare": false, 00:03:58.226 
"compare_and_write": false, 00:03:58.226 "abort": true, 00:03:58.226 "seek_hole": false, 00:03:58.226 "seek_data": false, 00:03:58.226 "copy": true, 00:03:58.226 "nvme_iov_md": false 00:03:58.226 }, 00:03:58.226 "memory_domains": [ 00:03:58.226 { 00:03:58.226 "dma_device_id": "system", 00:03:58.226 "dma_device_type": 1 00:03:58.226 }, 00:03:58.226 { 00:03:58.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.226 "dma_device_type": 2 00:03:58.226 } 00:03:58.226 ], 00:03:58.226 "driver_specific": {} 00:03:58.226 } 00:03:58.226 ]' 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.226 [2024-11-18 10:32:24.062916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:58.226 [2024-11-18 10:32:24.063067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.226 [2024-11-18 10:32:24.063093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:58.226 [2024-11-18 10:32:24.063104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.226 [2024-11-18 10:32:24.065259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.226 [2024-11-18 10:32:24.065293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.226 Passthru0 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.226 { 00:03:58.226 "name": "Malloc2", 00:03:58.226 "aliases": [ 00:03:58.226 "f183a08f-fd60-4780-8479-7dfbbb5aed75" 00:03:58.226 ], 00:03:58.226 "product_name": "Malloc disk", 00:03:58.226 "block_size": 512, 00:03:58.226 "num_blocks": 16384, 00:03:58.226 "uuid": "f183a08f-fd60-4780-8479-7dfbbb5aed75", 00:03:58.226 "assigned_rate_limits": { 00:03:58.226 "rw_ios_per_sec": 0, 00:03:58.226 "rw_mbytes_per_sec": 0, 00:03:58.226 "r_mbytes_per_sec": 0, 00:03:58.226 "w_mbytes_per_sec": 0 00:03:58.226 }, 00:03:58.226 "claimed": true, 00:03:58.226 "claim_type": "exclusive_write", 00:03:58.226 "zoned": false, 00:03:58.226 "supported_io_types": { 00:03:58.226 "read": true, 00:03:58.226 "write": true, 00:03:58.226 "unmap": true, 00:03:58.226 "flush": true, 00:03:58.226 "reset": true, 00:03:58.226 "nvme_admin": false, 00:03:58.226 "nvme_io": false, 00:03:58.226 "nvme_io_md": false, 00:03:58.226 "write_zeroes": true, 00:03:58.226 "zcopy": true, 00:03:58.226 "get_zone_info": false, 00:03:58.226 "zone_management": false, 00:03:58.226 "zone_append": false, 00:03:58.226 "compare": false, 00:03:58.226 "compare_and_write": false, 00:03:58.226 "abort": true, 00:03:58.226 "seek_hole": false, 00:03:58.226 "seek_data": false, 
00:03:58.226 "copy": true, 00:03:58.226 "nvme_iov_md": false 00:03:58.226 }, 00:03:58.226 "memory_domains": [ 00:03:58.226 { 00:03:58.226 "dma_device_id": "system", 00:03:58.226 "dma_device_type": 1 00:03:58.226 }, 00:03:58.226 { 00:03:58.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.226 "dma_device_type": 2 00:03:58.226 } 00:03:58.226 ], 00:03:58.226 "driver_specific": {} 00:03:58.226 }, 00:03:58.226 { 00:03:58.226 "name": "Passthru0", 00:03:58.226 "aliases": [ 00:03:58.226 "8d3e0fda-c821-5a88-8327-d4accc347fab" 00:03:58.226 ], 00:03:58.226 "product_name": "passthru", 00:03:58.226 "block_size": 512, 00:03:58.226 "num_blocks": 16384, 00:03:58.226 "uuid": "8d3e0fda-c821-5a88-8327-d4accc347fab", 00:03:58.226 "assigned_rate_limits": { 00:03:58.226 "rw_ios_per_sec": 0, 00:03:58.226 "rw_mbytes_per_sec": 0, 00:03:58.226 "r_mbytes_per_sec": 0, 00:03:58.226 "w_mbytes_per_sec": 0 00:03:58.226 }, 00:03:58.226 "claimed": false, 00:03:58.226 "zoned": false, 00:03:58.226 "supported_io_types": { 00:03:58.226 "read": true, 00:03:58.226 "write": true, 00:03:58.226 "unmap": true, 00:03:58.226 "flush": true, 00:03:58.226 "reset": true, 00:03:58.226 "nvme_admin": false, 00:03:58.226 "nvme_io": false, 00:03:58.226 "nvme_io_md": false, 00:03:58.226 "write_zeroes": true, 00:03:58.226 "zcopy": true, 00:03:58.226 "get_zone_info": false, 00:03:58.226 "zone_management": false, 00:03:58.226 "zone_append": false, 00:03:58.226 "compare": false, 00:03:58.226 "compare_and_write": false, 00:03:58.226 "abort": true, 00:03:58.226 "seek_hole": false, 00:03:58.226 "seek_data": false, 00:03:58.226 "copy": true, 00:03:58.226 "nvme_iov_md": false 00:03:58.226 }, 00:03:58.226 "memory_domains": [ 00:03:58.226 { 00:03:58.226 "dma_device_id": "system", 00:03:58.226 "dma_device_type": 1 00:03:58.226 }, 00:03:58.226 { 00:03:58.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.226 "dma_device_type": 2 00:03:58.226 } 00:03:58.226 ], 00:03:58.226 "driver_specific": { 00:03:58.226 "passthru": { 00:03:58.226 "name": "Passthru0", 00:03:58.226 "base_bdev_name": "Malloc2" 00:03:58.226 } 00:03:58.226 } 00:03:58.226 } 00:03:58.226 ]' 00:03:58.226 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.485 ************************************ 00:03:58.485 END TEST rpc_daemon_integrity 00:03:58.485 ************************************ 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.485 00:03:58.485 real 0m0.249s 00:03:58.485 user 0m0.134s 00:03:58.485 sys 0m0.035s 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.485 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.485 10:32:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:58.485 10:32:24 rpc -- rpc/rpc.sh@84 -- # killprocess 57087 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 57087 ']' 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@958 -- # kill -0 57087 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@959 -- # uname 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57087 00:03:58.485 killing process with pid 57087 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57087' 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@973 -- # kill 57087 00:03:58.485 10:32:24 rpc -- common/autotest_common.sh@978 -- # wait 57087 00:03:59.861 00:03:59.861 real 0m3.389s 00:03:59.861 user 0m3.799s 00:03:59.861 sys 0m0.593s 00:03:59.861 10:32:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.861 ************************************ 00:03:59.861 END TEST rpc 00:03:59.861 ************************************ 00:03:59.861 10:32:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.861 10:32:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:59.861 10:32:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.861 10:32:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.862 10:32:25 -- common/autotest_common.sh@10 -- # set +x 00:03:59.862 ************************************ 00:03:59.862 START TEST skip_rpc 00:03:59.862 ************************************ 00:03:59.862 10:32:25 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:59.862 * Looking for test storage... 
00:03:59.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:59.862 10:32:25 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.862 10:32:25 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.862 10:32:25 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.122 10:32:25 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.122 10:32:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:00.122 10:32:25 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.122 10:32:25 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.122 --rc genhtml_branch_coverage=1 00:04:00.122 --rc genhtml_function_coverage=1 00:04:00.122 --rc genhtml_legend=1 00:04:00.122 --rc geninfo_all_blocks=1 00:04:00.122 --rc geninfo_unexecuted_blocks=1 00:04:00.122 00:04:00.122 ' 00:04:00.122 10:32:25 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.122 --rc genhtml_branch_coverage=1 00:04:00.122 --rc genhtml_function_coverage=1 00:04:00.122 --rc genhtml_legend=1 00:04:00.122 --rc geninfo_all_blocks=1 00:04:00.122 --rc geninfo_unexecuted_blocks=1 00:04:00.122 00:04:00.122 ' 00:04:00.122 10:32:25 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.122 --rc genhtml_branch_coverage=1 00:04:00.122 --rc genhtml_function_coverage=1 00:04:00.122 --rc genhtml_legend=1 00:04:00.122 --rc geninfo_all_blocks=1 00:04:00.122 --rc geninfo_unexecuted_blocks=1 00:04:00.122 00:04:00.122 ' 00:04:00.122 10:32:25 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.122 --rc genhtml_branch_coverage=1 00:04:00.122 --rc genhtml_function_coverage=1 00:04:00.122 --rc genhtml_legend=1 00:04:00.122 --rc geninfo_all_blocks=1 00:04:00.122 --rc geninfo_unexecuted_blocks=1 00:04:00.122 00:04:00.122 ' 00:04:00.122 10:32:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:00.122 10:32:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:00.122 10:32:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:00.123 10:32:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.123 10:32:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.123 10:32:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.123 ************************************ 00:04:00.123 START TEST skip_rpc 00:04:00.123 ************************************ 00:04:00.123 10:32:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:00.123 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57299 00:04:00.123 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.123 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:00.123 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:00.123 [2024-11-18 10:32:25.869749] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:00.123 [2024-11-18 10:32:25.870054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57299 ] 00:04:00.384 [2024-11-18 10:32:26.029534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.384 [2024-11-18 10:32:26.117733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57299 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57299 ']' 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57299 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57299 00:04:05.652 killing process with pid 57299 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57299' 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57299 00:04:05.652 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57299 00:04:06.218 00:04:06.218 real 0m6.211s 00:04:06.218 user 0m5.832s 00:04:06.218 sys 0m0.275s 00:04:06.218 10:32:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.218 ************************************ 00:04:06.218 END TEST skip_rpc 00:04:06.218 ************************************ 00:04:06.218 10:32:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:06.218 10:32:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:06.218 10:32:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.218 10:32:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.218 10:32:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 ************************************ 00:04:06.218 START TEST skip_rpc_with_json 00:04:06.218 ************************************ 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57392 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57392 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57392 ']' 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.218 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.476 [2024-11-18 10:32:32.105084] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
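The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper. A rough, assumed sketch of that loop (the real implementation lives in test/common/autotest_common.sh and is not shown in this log): poll the RPC socket until the freshly started target answers, bailing out if the process dies first.

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}     # defaults mirror the socket path in the log
    while kill -0 "$pid" 2> /dev/null; do          # stop polling once the target has died
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                               # RPC server is up and answering
        fi
        sleep 0.1
    done
    return 1                                       # process exited before it started listening
}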
00:04:06.476 [2024-11-18 10:32:32.105184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57392 ] 00:04:06.476 [2024-11-18 10:32:32.248321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.476 [2024-11-18 10:32:32.328194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.411 [2024-11-18 10:32:32.944244] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:07.411 request: 00:04:07.411 { 00:04:07.411 "trtype": "tcp", 00:04:07.411 "method": "nvmf_get_transports", 00:04:07.411 "req_id": 1 00:04:07.411 } 00:04:07.411 Got JSON-RPC error response 00:04:07.411 response: 00:04:07.411 { 00:04:07.411 "code": -19, 00:04:07.411 "message": "No such device" 00:04:07.411 } 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.411 [2024-11-18 10:32:32.952320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.411 10:32:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.411 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.411 10:32:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:07.411 { 00:04:07.411 "subsystems": [ 00:04:07.411 { 00:04:07.411 "subsystem": "fsdev", 00:04:07.411 "config": [ 00:04:07.411 { 00:04:07.411 "method": "fsdev_set_opts", 00:04:07.411 "params": { 00:04:07.411 "fsdev_io_pool_size": 65535, 00:04:07.411 "fsdev_io_cache_size": 256 00:04:07.411 } 00:04:07.411 } 00:04:07.411 ] 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "subsystem": "keyring", 00:04:07.411 "config": [] 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "subsystem": "iobuf", 00:04:07.411 "config": [ 00:04:07.411 { 00:04:07.411 "method": "iobuf_set_options", 00:04:07.411 "params": { 00:04:07.411 "small_pool_count": 8192, 00:04:07.411 "large_pool_count": 1024, 00:04:07.411 "small_bufsize": 8192, 00:04:07.411 "large_bufsize": 135168, 00:04:07.411 "enable_numa": false 00:04:07.411 } 00:04:07.411 } 00:04:07.411 ] 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "subsystem": "sock", 00:04:07.411 "config": [ 00:04:07.411 { 
00:04:07.411 "method": "sock_set_default_impl", 00:04:07.411 "params": { 00:04:07.411 "impl_name": "posix" 00:04:07.411 } 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "method": "sock_impl_set_options", 00:04:07.411 "params": { 00:04:07.411 "impl_name": "ssl", 00:04:07.411 "recv_buf_size": 4096, 00:04:07.411 "send_buf_size": 4096, 00:04:07.411 "enable_recv_pipe": true, 00:04:07.411 "enable_quickack": false, 00:04:07.411 "enable_placement_id": 0, 00:04:07.411 "enable_zerocopy_send_server": true, 00:04:07.411 "enable_zerocopy_send_client": false, 00:04:07.411 "zerocopy_threshold": 0, 00:04:07.411 "tls_version": 0, 00:04:07.411 "enable_ktls": false 00:04:07.411 } 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "method": "sock_impl_set_options", 00:04:07.411 "params": { 00:04:07.411 "impl_name": "posix", 00:04:07.411 "recv_buf_size": 2097152, 00:04:07.411 "send_buf_size": 2097152, 00:04:07.411 "enable_recv_pipe": true, 00:04:07.411 "enable_quickack": false, 00:04:07.411 "enable_placement_id": 0, 00:04:07.411 "enable_zerocopy_send_server": true, 00:04:07.411 "enable_zerocopy_send_client": false, 00:04:07.411 "zerocopy_threshold": 0, 00:04:07.411 "tls_version": 0, 00:04:07.411 "enable_ktls": false 00:04:07.411 } 00:04:07.411 } 00:04:07.411 ] 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "subsystem": "vmd", 00:04:07.411 "config": [] 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "subsystem": "accel", 00:04:07.411 "config": [ 00:04:07.411 { 00:04:07.411 "method": "accel_set_options", 00:04:07.411 "params": { 00:04:07.411 "small_cache_size": 128, 00:04:07.411 "large_cache_size": 16, 00:04:07.411 "task_count": 2048, 00:04:07.411 "sequence_count": 2048, 00:04:07.411 "buf_count": 2048 00:04:07.411 } 00:04:07.411 } 00:04:07.411 ] 00:04:07.411 }, 00:04:07.411 { 00:04:07.411 "subsystem": "bdev", 00:04:07.411 "config": [ 00:04:07.411 { 00:04:07.411 "method": "bdev_set_options", 00:04:07.411 "params": { 00:04:07.411 "bdev_io_pool_size": 65535, 00:04:07.411 "bdev_io_cache_size": 256, 00:04:07.412 "bdev_auto_examine": true, 00:04:07.412 "iobuf_small_cache_size": 128, 00:04:07.412 "iobuf_large_cache_size": 16 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "bdev_raid_set_options", 00:04:07.412 "params": { 00:04:07.412 "process_window_size_kb": 1024, 00:04:07.412 "process_max_bandwidth_mb_sec": 0 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "bdev_iscsi_set_options", 00:04:07.412 "params": { 00:04:07.412 "timeout_sec": 30 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "bdev_nvme_set_options", 00:04:07.412 "params": { 00:04:07.412 "action_on_timeout": "none", 00:04:07.412 "timeout_us": 0, 00:04:07.412 "timeout_admin_us": 0, 00:04:07.412 "keep_alive_timeout_ms": 10000, 00:04:07.412 "arbitration_burst": 0, 00:04:07.412 "low_priority_weight": 0, 00:04:07.412 "medium_priority_weight": 0, 00:04:07.412 "high_priority_weight": 0, 00:04:07.412 "nvme_adminq_poll_period_us": 10000, 00:04:07.412 "nvme_ioq_poll_period_us": 0, 00:04:07.412 "io_queue_requests": 0, 00:04:07.412 "delay_cmd_submit": true, 00:04:07.412 "transport_retry_count": 4, 00:04:07.412 "bdev_retry_count": 3, 00:04:07.412 "transport_ack_timeout": 0, 00:04:07.412 "ctrlr_loss_timeout_sec": 0, 00:04:07.412 "reconnect_delay_sec": 0, 00:04:07.412 "fast_io_fail_timeout_sec": 0, 00:04:07.412 "disable_auto_failback": false, 00:04:07.412 "generate_uuids": false, 00:04:07.412 "transport_tos": 0, 00:04:07.412 "nvme_error_stat": false, 00:04:07.412 "rdma_srq_size": 0, 00:04:07.412 "io_path_stat": false, 
00:04:07.412 "allow_accel_sequence": false, 00:04:07.412 "rdma_max_cq_size": 0, 00:04:07.412 "rdma_cm_event_timeout_ms": 0, 00:04:07.412 "dhchap_digests": [ 00:04:07.412 "sha256", 00:04:07.412 "sha384", 00:04:07.412 "sha512" 00:04:07.412 ], 00:04:07.412 "dhchap_dhgroups": [ 00:04:07.412 "null", 00:04:07.412 "ffdhe2048", 00:04:07.412 "ffdhe3072", 00:04:07.412 "ffdhe4096", 00:04:07.412 "ffdhe6144", 00:04:07.412 "ffdhe8192" 00:04:07.412 ] 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "bdev_nvme_set_hotplug", 00:04:07.412 "params": { 00:04:07.412 "period_us": 100000, 00:04:07.412 "enable": false 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "bdev_wait_for_examine" 00:04:07.412 } 00:04:07.412 ] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "scsi", 00:04:07.412 "config": null 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "scheduler", 00:04:07.412 "config": [ 00:04:07.412 { 00:04:07.412 "method": "framework_set_scheduler", 00:04:07.412 "params": { 00:04:07.412 "name": "static" 00:04:07.412 } 00:04:07.412 } 00:04:07.412 ] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "vhost_scsi", 00:04:07.412 "config": [] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "vhost_blk", 00:04:07.412 "config": [] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "ublk", 00:04:07.412 "config": [] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "nbd", 00:04:07.412 "config": [] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "nvmf", 00:04:07.412 "config": [ 00:04:07.412 { 00:04:07.412 "method": "nvmf_set_config", 00:04:07.412 "params": { 00:04:07.412 "discovery_filter": "match_any", 00:04:07.412 "admin_cmd_passthru": { 00:04:07.412 "identify_ctrlr": false 00:04:07.412 }, 00:04:07.412 "dhchap_digests": [ 00:04:07.412 "sha256", 00:04:07.412 "sha384", 00:04:07.412 "sha512" 00:04:07.412 ], 00:04:07.412 "dhchap_dhgroups": [ 00:04:07.412 "null", 00:04:07.412 "ffdhe2048", 00:04:07.412 "ffdhe3072", 00:04:07.412 "ffdhe4096", 00:04:07.412 "ffdhe6144", 00:04:07.412 "ffdhe8192" 00:04:07.412 ] 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "nvmf_set_max_subsystems", 00:04:07.412 "params": { 00:04:07.412 "max_subsystems": 1024 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "nvmf_set_crdt", 00:04:07.412 "params": { 00:04:07.412 "crdt1": 0, 00:04:07.412 "crdt2": 0, 00:04:07.412 "crdt3": 0 00:04:07.412 } 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "method": "nvmf_create_transport", 00:04:07.412 "params": { 00:04:07.412 "trtype": "TCP", 00:04:07.412 "max_queue_depth": 128, 00:04:07.412 "max_io_qpairs_per_ctrlr": 127, 00:04:07.412 "in_capsule_data_size": 4096, 00:04:07.412 "max_io_size": 131072, 00:04:07.412 "io_unit_size": 131072, 00:04:07.412 "max_aq_depth": 128, 00:04:07.412 "num_shared_buffers": 511, 00:04:07.412 "buf_cache_size": 4294967295, 00:04:07.412 "dif_insert_or_strip": false, 00:04:07.412 "zcopy": false, 00:04:07.412 "c2h_success": true, 00:04:07.412 "sock_priority": 0, 00:04:07.412 "abort_timeout_sec": 1, 00:04:07.412 "ack_timeout": 0, 00:04:07.412 "data_wr_pool_size": 0 00:04:07.412 } 00:04:07.412 } 00:04:07.412 ] 00:04:07.412 }, 00:04:07.412 { 00:04:07.412 "subsystem": "iscsi", 00:04:07.412 "config": [ 00:04:07.412 { 00:04:07.412 "method": "iscsi_set_options", 00:04:07.412 "params": { 00:04:07.412 "node_base": "iqn.2016-06.io.spdk", 00:04:07.412 "max_sessions": 128, 00:04:07.412 "max_connections_per_session": 2, 00:04:07.412 "max_queue_depth": 64, 00:04:07.412 
"default_time2wait": 2, 00:04:07.412 "default_time2retain": 20, 00:04:07.412 "first_burst_length": 8192, 00:04:07.412 "immediate_data": true, 00:04:07.412 "allow_duplicated_isid": false, 00:04:07.412 "error_recovery_level": 0, 00:04:07.412 "nop_timeout": 60, 00:04:07.412 "nop_in_interval": 30, 00:04:07.412 "disable_chap": false, 00:04:07.412 "require_chap": false, 00:04:07.412 "mutual_chap": false, 00:04:07.412 "chap_group": 0, 00:04:07.412 "max_large_datain_per_connection": 64, 00:04:07.412 "max_r2t_per_connection": 4, 00:04:07.412 "pdu_pool_size": 36864, 00:04:07.412 "immediate_data_pool_size": 16384, 00:04:07.412 "data_out_pool_size": 2048 00:04:07.412 } 00:04:07.412 } 00:04:07.412 ] 00:04:07.412 } 00:04:07.412 ] 00:04:07.412 } 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57392 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57392 ']' 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57392 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57392 00:04:07.412 killing process with pid 57392 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57392' 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57392 00:04:07.412 10:32:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57392 00:04:08.788 10:32:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57426 00:04:08.788 10:32:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:08.788 10:32:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57426 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57426 ']' 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57426 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57426 00:04:14.050 killing process with pid 57426 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57426' 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57426 00:04:14.050 10:32:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57426 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:14.984 ************************************ 00:04:14.984 END TEST skip_rpc_with_json 00:04:14.984 ************************************ 00:04:14.984 00:04:14.984 real 0m8.478s 00:04:14.984 user 0m8.135s 00:04:14.984 sys 0m0.561s 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.984 10:32:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:14.984 10:32:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.984 10:32:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.984 10:32:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.984 ************************************ 00:04:14.984 START TEST skip_rpc_with_delay 00:04:14.984 ************************************ 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.984 [2024-11-18 10:32:40.631745] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.984 00:04:14.984 real 0m0.122s 00:04:14.984 user 0m0.065s 00:04:14.984 sys 0m0.055s 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.984 ************************************ 00:04:14.984 END TEST skip_rpc_with_delay 00:04:14.984 ************************************ 00:04:14.984 10:32:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:14.984 10:32:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:14.984 10:32:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:14.984 10:32:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:14.984 10:32:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.984 10:32:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.984 10:32:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.984 ************************************ 00:04:14.984 START TEST exit_on_failed_rpc_init 00:04:14.984 ************************************ 00:04:14.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57549 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57549 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57549 ']' 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.984 10:32:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.984 [2024-11-18 10:32:40.795638] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:14.984 [2024-11-18 10:32:40.795954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57549 ] 00:04:15.243 [2024-11-18 10:32:40.941890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.243 [2024-11-18 10:32:41.022643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:15.810 10:32:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.810 [2024-11-18 10:32:41.659812] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:15.810 [2024-11-18 10:32:41.659935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57561 ] 00:04:16.069 [2024-11-18 10:32:41.810557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.069 [2024-11-18 10:32:41.908694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.069 [2024-11-18 10:32:41.908780] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:16.069 [2024-11-18 10:32:41.908794] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:16.069 [2024-11-18 10:32:41.908806] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57549 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57549 ']' 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57549 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57549 00:04:16.328 killing process with pid 57549 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57549' 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57549 00:04:16.328 10:32:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57549 00:04:17.703 ************************************ 00:04:17.703 END TEST exit_on_failed_rpc_init 00:04:17.703 ************************************ 00:04:17.703 00:04:17.703 real 0m2.568s 00:04:17.703 user 0m2.857s 00:04:17.703 sys 0m0.376s 00:04:17.703 10:32:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.703 10:32:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.703 10:32:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.703 00:04:17.703 real 0m17.692s 00:04:17.703 user 0m17.031s 00:04:17.703 sys 0m1.437s 00:04:17.703 10:32:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.703 10:32:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.703 ************************************ 00:04:17.703 END TEST skip_rpc 00:04:17.703 ************************************ 00:04:17.703 10:32:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:17.703 10:32:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.703 10:32:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.703 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.703 
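The skip_rpc suite that just finished leans on a NOT helper that inverts an exit status: the traced runs expect spdk_tgt to refuse '--no-rpc-server ... --wait-for-rpc', and expect a second target to fail binding /var/tmp/spdk.sock while the first one owns it. A hedged sketch of that pattern (the helper body is assumed; the binary path and flags are copied from the log, and the real helper additionally maps exit codes such as the es=234 -> 106 -> 1 chain seen above):

NOT() {                                            # assumed shape of the harness helper
    if "$@"; then
        return 1                                   # command unexpectedly succeeded
    fi
    return 0                                       # the expected failure happened
}
# expected to fail: --wait-for-rpc makes no sense when no RPC server will start
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# expected to fail: /var/tmp/spdk.sock is already owned by the first target
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2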
************************************ 00:04:17.703 START TEST rpc_client 00:04:17.703 ************************************ 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:17.703 * Looking for test storage... 00:04:17.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.703 10:32:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.703 10:32:43 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.704 --rc genhtml_branch_coverage=1 00:04:17.704 --rc genhtml_function_coverage=1 00:04:17.704 --rc genhtml_legend=1 00:04:17.704 --rc geninfo_all_blocks=1 00:04:17.704 --rc geninfo_unexecuted_blocks=1 00:04:17.704 00:04:17.704 ' 00:04:17.704 10:32:43 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.704 --rc genhtml_branch_coverage=1 00:04:17.704 --rc genhtml_function_coverage=1 00:04:17.704 --rc genhtml_legend=1 00:04:17.704 --rc geninfo_all_blocks=1 00:04:17.704 --rc geninfo_unexecuted_blocks=1 00:04:17.704 00:04:17.704 ' 00:04:17.704 10:32:43 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.704 --rc genhtml_branch_coverage=1 00:04:17.704 --rc genhtml_function_coverage=1 00:04:17.704 --rc genhtml_legend=1 00:04:17.704 --rc geninfo_all_blocks=1 00:04:17.704 --rc geninfo_unexecuted_blocks=1 00:04:17.704 00:04:17.704 ' 00:04:17.704 10:32:43 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.704 --rc genhtml_branch_coverage=1 00:04:17.704 --rc genhtml_function_coverage=1 00:04:17.704 --rc genhtml_legend=1 00:04:17.704 --rc geninfo_all_blocks=1 00:04:17.704 --rc geninfo_unexecuted_blocks=1 00:04:17.704 00:04:17.704 ' 00:04:17.704 10:32:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:17.704 OK 00:04:17.704 10:32:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.704 ************************************ 00:04:17.704 END TEST rpc_client 00:04:17.704 ************************************ 00:04:17.704 00:04:17.704 real 0m0.186s 00:04:17.704 user 0m0.103s 00:04:17.704 sys 0m0.084s 00:04:17.704 10:32:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.704 10:32:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.963 10:32:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:17.963 10:32:43 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.963 10:32:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.963 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.963 ************************************ 00:04:17.963 START TEST json_config 00:04:17.963 ************************************ 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.963 10:32:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.963 10:32:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.963 10:32:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.963 10:32:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.963 10:32:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.963 10:32:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:17.963 10:32:43 json_config -- scripts/common.sh@345 -- # : 1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.963 10:32:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.963 10:32:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@353 -- # local d=1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.963 10:32:43 json_config -- scripts/common.sh@355 -- # echo 1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.963 10:32:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@353 -- # local d=2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.963 10:32:43 json_config -- scripts/common.sh@355 -- # echo 2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.963 10:32:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.963 10:32:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.963 10:32:43 json_config -- scripts/common.sh@368 -- # return 0 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.963 --rc genhtml_branch_coverage=1 00:04:17.963 --rc genhtml_function_coverage=1 00:04:17.963 --rc genhtml_legend=1 00:04:17.963 --rc geninfo_all_blocks=1 00:04:17.963 --rc geninfo_unexecuted_blocks=1 00:04:17.963 00:04:17.963 ' 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.963 --rc genhtml_branch_coverage=1 00:04:17.963 --rc genhtml_function_coverage=1 00:04:17.963 --rc genhtml_legend=1 00:04:17.963 --rc geninfo_all_blocks=1 00:04:17.963 --rc geninfo_unexecuted_blocks=1 00:04:17.963 00:04:17.963 ' 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.963 --rc genhtml_branch_coverage=1 00:04:17.963 --rc genhtml_function_coverage=1 00:04:17.963 --rc genhtml_legend=1 00:04:17.963 --rc geninfo_all_blocks=1 00:04:17.963 --rc geninfo_unexecuted_blocks=1 00:04:17.963 00:04:17.963 ' 00:04:17.963 10:32:43 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.963 --rc genhtml_branch_coverage=1 00:04:17.963 --rc genhtml_function_coverage=1 00:04:17.963 --rc genhtml_legend=1 00:04:17.963 --rc geninfo_all_blocks=1 00:04:17.963 --rc geninfo_unexecuted_blocks=1 00:04:17.963 00:04:17.963 ' 00:04:17.963 10:32:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.963 10:32:43 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db5d57a9-13fc-4a19-8606-73dd9425ba6b 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=db5d57a9-13fc-4a19-8606-73dd9425ba6b 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:17.963 10:32:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:17.963 10:32:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:17.963 10:32:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:17.963 10:32:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:17.963 10:32:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.963 10:32:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.963 10:32:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.963 10:32:43 json_config -- paths/export.sh@5 -- # export PATH 00:04:17.963 10:32:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@51 -- # : 0 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:17.963 10:32:43 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:17.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:17.963 10:32:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:17.963 10:32:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:17.963 10:32:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:17.963 10:32:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:17.964 10:32:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:17.964 10:32:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:17.964 10:32:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:17.964 WARNING: No tests are enabled so not running JSON configuration tests 00:04:17.964 10:32:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:17.964 00:04:17.964 real 0m0.153s 00:04:17.964 user 0m0.094s 00:04:17.964 sys 0m0.056s 00:04:17.964 10:32:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.964 10:32:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.964 ************************************ 00:04:17.964 END TEST json_config 00:04:17.964 ************************************ 00:04:17.964 10:32:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:17.964 10:32:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.964 10:32:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.964 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.964 ************************************ 00:04:17.964 START TEST json_config_extra_key 00:04:17.964 ************************************ 00:04:17.964 10:32:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.223 10:32:43 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.223 --rc genhtml_branch_coverage=1 00:04:18.223 --rc genhtml_function_coverage=1 00:04:18.223 --rc genhtml_legend=1 00:04:18.223 --rc geninfo_all_blocks=1 00:04:18.223 --rc geninfo_unexecuted_blocks=1 00:04:18.223 00:04:18.223 ' 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.223 --rc genhtml_branch_coverage=1 00:04:18.223 --rc genhtml_function_coverage=1 00:04:18.223 --rc genhtml_legend=1 00:04:18.223 --rc geninfo_all_blocks=1 00:04:18.223 --rc geninfo_unexecuted_blocks=1 00:04:18.223 00:04:18.223 ' 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.223 --rc genhtml_branch_coverage=1 00:04:18.223 --rc genhtml_function_coverage=1 00:04:18.223 --rc genhtml_legend=1 00:04:18.223 --rc geninfo_all_blocks=1 00:04:18.223 --rc geninfo_unexecuted_blocks=1 00:04:18.223 00:04:18.223 ' 00:04:18.223 10:32:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.223 --rc genhtml_branch_coverage=1 00:04:18.223 --rc 
genhtml_function_coverage=1 00:04:18.223 --rc genhtml_legend=1 00:04:18.223 --rc geninfo_all_blocks=1 00:04:18.223 --rc geninfo_unexecuted_blocks=1 00:04:18.223 00:04:18.223 ' 00:04:18.223 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db5d57a9-13fc-4a19-8606-73dd9425ba6b 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=db5d57a9-13fc-4a19-8606-73dd9425ba6b 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.223 10:32:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:18.223 10:32:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.224 10:32:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.224 10:32:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.224 10:32:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.224 10:32:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.224 10:32:43 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.224 10:32:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:18.224 10:32:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:18.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:18.224 10:32:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:18.224 INFO: launching applications... 
00:04:18.224 10:32:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57755 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.224 Waiting for target to run... 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57755 /var/tmp/spdk_tgt.sock 00:04:18.224 10:32:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57755 ']' 00:04:18.224 10:32:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:18.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.224 10:32:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.224 10:32:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.224 10:32:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.224 10:32:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.224 10:32:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:18.224 [2024-11-18 10:32:44.026805] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:18.224 [2024-11-18 10:32:44.027064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57755 ] 00:04:18.483 [2024-11-18 10:32:44.338669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.740 [2024-11-18 10:32:44.425702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.306 00:04:19.306 INFO: shutting down applications... 00:04:19.306 10:32:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.306 10:32:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:19.306 10:32:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:19.306 10:32:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57755 ]] 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57755 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57755 00:04:19.306 10:32:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.564 10:32:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.564 10:32:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.564 10:32:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57755 00:04:19.564 10:32:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.130 10:32:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.130 10:32:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.130 10:32:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57755 00:04:20.130 10:32:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.696 10:32:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.696 10:32:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.696 10:32:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57755 00:04:20.696 10:32:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.263 SPDK target shutdown done 00:04:21.263 Success 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57755 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.263 10:32:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.263 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:21.263 00:04:21.263 real 0m3.126s 00:04:21.263 user 0m2.631s 00:04:21.263 sys 0m0.398s 00:04:21.263 ************************************ 00:04:21.263 END TEST json_config_extra_key 00:04:21.263 ************************************ 00:04:21.263 10:32:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.263 10:32:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.263 10:32:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.263 10:32:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.263 10:32:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.263 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:21.263 
************************************ 00:04:21.263 START TEST alias_rpc 00:04:21.263 ************************************ 00:04:21.263 10:32:46 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.264 * Looking for test storage... 00:04:21.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.264 10:32:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.264 --rc genhtml_branch_coverage=1 00:04:21.264 --rc genhtml_function_coverage=1 00:04:21.264 --rc genhtml_legend=1 00:04:21.264 --rc geninfo_all_blocks=1 00:04:21.264 --rc geninfo_unexecuted_blocks=1 00:04:21.264 00:04:21.264 ' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.264 --rc genhtml_branch_coverage=1 00:04:21.264 --rc genhtml_function_coverage=1 00:04:21.264 --rc genhtml_legend=1 00:04:21.264 --rc geninfo_all_blocks=1 00:04:21.264 --rc geninfo_unexecuted_blocks=1 00:04:21.264 00:04:21.264 ' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.264 --rc genhtml_branch_coverage=1 00:04:21.264 --rc genhtml_function_coverage=1 00:04:21.264 --rc genhtml_legend=1 00:04:21.264 --rc geninfo_all_blocks=1 00:04:21.264 --rc geninfo_unexecuted_blocks=1 00:04:21.264 00:04:21.264 ' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.264 --rc genhtml_branch_coverage=1 00:04:21.264 --rc genhtml_function_coverage=1 00:04:21.264 --rc genhtml_legend=1 00:04:21.264 --rc geninfo_all_blocks=1 00:04:21.264 --rc geninfo_unexecuted_blocks=1 00:04:21.264 00:04:21.264 ' 00:04:21.264 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:21.264 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57848 00:04:21.264 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57848 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57848 ']' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:21.264 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.264 10:32:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.522 [2024-11-18 10:32:47.179529] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:21.522 [2024-11-18 10:32:47.179816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57848 ] 00:04:21.522 [2024-11-18 10:32:47.337469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.781 [2024-11-18 10:32:47.413102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.349 10:32:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.349 10:32:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.349 10:32:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:22.607 10:32:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57848 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57848 ']' 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57848 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57848 00:04:22.607 killing process with pid 57848 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57848' 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 57848 00:04:22.607 10:32:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 57848 00:04:23.988 ************************************ 00:04:23.988 END TEST alias_rpc 00:04:23.988 ************************************ 00:04:23.988 00:04:23.988 real 0m2.450s 00:04:23.988 user 0m2.573s 00:04:23.988 sys 0m0.374s 00:04:23.988 10:32:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.988 10:32:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.988 10:32:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:23.988 10:32:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:23.988 10:32:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.988 10:32:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.988 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:04:23.988 ************************************ 00:04:23.988 START TEST spdkcli_tcp 00:04:23.988 ************************************ 00:04:23.988 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:23.988 * 
Looking for test storage... 00:04:23.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:23.988 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.988 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.988 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.989 10:32:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.989 --rc genhtml_branch_coverage=1 00:04:23.989 --rc genhtml_function_coverage=1 00:04:23.989 --rc genhtml_legend=1 00:04:23.989 --rc geninfo_all_blocks=1 00:04:23.989 --rc geninfo_unexecuted_blocks=1 00:04:23.989 00:04:23.989 ' 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.989 --rc genhtml_branch_coverage=1 00:04:23.989 --rc genhtml_function_coverage=1 00:04:23.989 --rc genhtml_legend=1 00:04:23.989 --rc geninfo_all_blocks=1 00:04:23.989 --rc 
geninfo_unexecuted_blocks=1 00:04:23.989 00:04:23.989 ' 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.989 --rc genhtml_branch_coverage=1 00:04:23.989 --rc genhtml_function_coverage=1 00:04:23.989 --rc genhtml_legend=1 00:04:23.989 --rc geninfo_all_blocks=1 00:04:23.989 --rc geninfo_unexecuted_blocks=1 00:04:23.989 00:04:23.989 ' 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.989 --rc genhtml_branch_coverage=1 00:04:23.989 --rc genhtml_function_coverage=1 00:04:23.989 --rc genhtml_legend=1 00:04:23.989 --rc geninfo_all_blocks=1 00:04:23.989 --rc geninfo_unexecuted_blocks=1 00:04:23.989 00:04:23.989 ' 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57938 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57938 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57938 ']' 00:04:23.989 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.989 10:32:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.989 [2024-11-18 10:32:49.705871] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:23.989 [2024-11-18 10:32:49.706134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57938 ] 00:04:23.989 [2024-11-18 10:32:49.860456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.248 [2024-11-18 10:32:49.936883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.248 [2024-11-18 10:32:49.936951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.814 10:32:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.814 10:32:50 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:24.814 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57955 00:04:24.814 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:24.814 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:25.076 [ 00:04:25.076 "bdev_malloc_delete", 00:04:25.076 "bdev_malloc_create", 00:04:25.076 "bdev_null_resize", 00:04:25.076 "bdev_null_delete", 00:04:25.076 "bdev_null_create", 00:04:25.076 "bdev_nvme_cuse_unregister", 00:04:25.076 "bdev_nvme_cuse_register", 00:04:25.076 "bdev_opal_new_user", 00:04:25.076 "bdev_opal_set_lock_state", 00:04:25.076 "bdev_opal_delete", 00:04:25.076 "bdev_opal_get_info", 00:04:25.076 "bdev_opal_create", 00:04:25.076 "bdev_nvme_opal_revert", 00:04:25.076 "bdev_nvme_opal_init", 00:04:25.076 "bdev_nvme_send_cmd", 00:04:25.076 "bdev_nvme_set_keys", 00:04:25.076 "bdev_nvme_get_path_iostat", 00:04:25.076 "bdev_nvme_get_mdns_discovery_info", 00:04:25.076 "bdev_nvme_stop_mdns_discovery", 00:04:25.076 "bdev_nvme_start_mdns_discovery", 00:04:25.076 "bdev_nvme_set_multipath_policy", 00:04:25.076 "bdev_nvme_set_preferred_path", 00:04:25.076 "bdev_nvme_get_io_paths", 00:04:25.076 "bdev_nvme_remove_error_injection", 00:04:25.076 "bdev_nvme_add_error_injection", 00:04:25.076 "bdev_nvme_get_discovery_info", 00:04:25.076 "bdev_nvme_stop_discovery", 00:04:25.076 "bdev_nvme_start_discovery", 00:04:25.076 "bdev_nvme_get_controller_health_info", 00:04:25.076 "bdev_nvme_disable_controller", 00:04:25.076 "bdev_nvme_enable_controller", 00:04:25.076 "bdev_nvme_reset_controller", 00:04:25.076 "bdev_nvme_get_transport_statistics", 00:04:25.076 "bdev_nvme_apply_firmware", 00:04:25.076 "bdev_nvme_detach_controller", 00:04:25.076 "bdev_nvme_get_controllers", 00:04:25.076 "bdev_nvme_attach_controller", 00:04:25.076 "bdev_nvme_set_hotplug", 00:04:25.077 "bdev_nvme_set_options", 00:04:25.077 "bdev_passthru_delete", 00:04:25.077 "bdev_passthru_create", 00:04:25.077 "bdev_lvol_set_parent_bdev", 00:04:25.077 "bdev_lvol_set_parent", 00:04:25.077 "bdev_lvol_check_shallow_copy", 00:04:25.077 "bdev_lvol_start_shallow_copy", 00:04:25.077 "bdev_lvol_grow_lvstore", 00:04:25.077 "bdev_lvol_get_lvols", 00:04:25.077 "bdev_lvol_get_lvstores", 00:04:25.077 "bdev_lvol_delete", 00:04:25.077 "bdev_lvol_set_read_only", 00:04:25.077 "bdev_lvol_resize", 00:04:25.077 "bdev_lvol_decouple_parent", 00:04:25.077 "bdev_lvol_inflate", 00:04:25.077 "bdev_lvol_rename", 00:04:25.077 "bdev_lvol_clone_bdev", 00:04:25.077 "bdev_lvol_clone", 00:04:25.077 "bdev_lvol_snapshot", 00:04:25.077 "bdev_lvol_create", 00:04:25.077 "bdev_lvol_delete_lvstore", 00:04:25.077 "bdev_lvol_rename_lvstore", 00:04:25.077 
"bdev_lvol_create_lvstore", 00:04:25.077 "bdev_raid_set_options", 00:04:25.077 "bdev_raid_remove_base_bdev", 00:04:25.077 "bdev_raid_add_base_bdev", 00:04:25.077 "bdev_raid_delete", 00:04:25.077 "bdev_raid_create", 00:04:25.077 "bdev_raid_get_bdevs", 00:04:25.077 "bdev_error_inject_error", 00:04:25.077 "bdev_error_delete", 00:04:25.077 "bdev_error_create", 00:04:25.077 "bdev_split_delete", 00:04:25.077 "bdev_split_create", 00:04:25.077 "bdev_delay_delete", 00:04:25.077 "bdev_delay_create", 00:04:25.077 "bdev_delay_update_latency", 00:04:25.077 "bdev_zone_block_delete", 00:04:25.077 "bdev_zone_block_create", 00:04:25.077 "blobfs_create", 00:04:25.077 "blobfs_detect", 00:04:25.077 "blobfs_set_cache_size", 00:04:25.077 "bdev_xnvme_delete", 00:04:25.077 "bdev_xnvme_create", 00:04:25.077 "bdev_aio_delete", 00:04:25.077 "bdev_aio_rescan", 00:04:25.077 "bdev_aio_create", 00:04:25.077 "bdev_ftl_set_property", 00:04:25.077 "bdev_ftl_get_properties", 00:04:25.077 "bdev_ftl_get_stats", 00:04:25.077 "bdev_ftl_unmap", 00:04:25.077 "bdev_ftl_unload", 00:04:25.077 "bdev_ftl_delete", 00:04:25.077 "bdev_ftl_load", 00:04:25.077 "bdev_ftl_create", 00:04:25.077 "bdev_virtio_attach_controller", 00:04:25.077 "bdev_virtio_scsi_get_devices", 00:04:25.077 "bdev_virtio_detach_controller", 00:04:25.077 "bdev_virtio_blk_set_hotplug", 00:04:25.077 "bdev_iscsi_delete", 00:04:25.077 "bdev_iscsi_create", 00:04:25.077 "bdev_iscsi_set_options", 00:04:25.077 "accel_error_inject_error", 00:04:25.077 "ioat_scan_accel_module", 00:04:25.077 "dsa_scan_accel_module", 00:04:25.077 "iaa_scan_accel_module", 00:04:25.077 "keyring_file_remove_key", 00:04:25.077 "keyring_file_add_key", 00:04:25.077 "keyring_linux_set_options", 00:04:25.077 "fsdev_aio_delete", 00:04:25.077 "fsdev_aio_create", 00:04:25.077 "iscsi_get_histogram", 00:04:25.077 "iscsi_enable_histogram", 00:04:25.077 "iscsi_set_options", 00:04:25.077 "iscsi_get_auth_groups", 00:04:25.077 "iscsi_auth_group_remove_secret", 00:04:25.077 "iscsi_auth_group_add_secret", 00:04:25.077 "iscsi_delete_auth_group", 00:04:25.077 "iscsi_create_auth_group", 00:04:25.077 "iscsi_set_discovery_auth", 00:04:25.077 "iscsi_get_options", 00:04:25.077 "iscsi_target_node_request_logout", 00:04:25.077 "iscsi_target_node_set_redirect", 00:04:25.077 "iscsi_target_node_set_auth", 00:04:25.077 "iscsi_target_node_add_lun", 00:04:25.077 "iscsi_get_stats", 00:04:25.077 "iscsi_get_connections", 00:04:25.077 "iscsi_portal_group_set_auth", 00:04:25.077 "iscsi_start_portal_group", 00:04:25.077 "iscsi_delete_portal_group", 00:04:25.077 "iscsi_create_portal_group", 00:04:25.077 "iscsi_get_portal_groups", 00:04:25.077 "iscsi_delete_target_node", 00:04:25.077 "iscsi_target_node_remove_pg_ig_maps", 00:04:25.077 "iscsi_target_node_add_pg_ig_maps", 00:04:25.077 "iscsi_create_target_node", 00:04:25.077 "iscsi_get_target_nodes", 00:04:25.077 "iscsi_delete_initiator_group", 00:04:25.077 "iscsi_initiator_group_remove_initiators", 00:04:25.077 "iscsi_initiator_group_add_initiators", 00:04:25.077 "iscsi_create_initiator_group", 00:04:25.077 "iscsi_get_initiator_groups", 00:04:25.077 "nvmf_set_crdt", 00:04:25.077 "nvmf_set_config", 00:04:25.077 "nvmf_set_max_subsystems", 00:04:25.077 "nvmf_stop_mdns_prr", 00:04:25.077 "nvmf_publish_mdns_prr", 00:04:25.077 "nvmf_subsystem_get_listeners", 00:04:25.077 "nvmf_subsystem_get_qpairs", 00:04:25.077 "nvmf_subsystem_get_controllers", 00:04:25.077 "nvmf_get_stats", 00:04:25.077 "nvmf_get_transports", 00:04:25.077 "nvmf_create_transport", 00:04:25.077 "nvmf_get_targets", 00:04:25.077 
"nvmf_delete_target", 00:04:25.077 "nvmf_create_target", 00:04:25.077 "nvmf_subsystem_allow_any_host", 00:04:25.077 "nvmf_subsystem_set_keys", 00:04:25.077 "nvmf_subsystem_remove_host", 00:04:25.077 "nvmf_subsystem_add_host", 00:04:25.077 "nvmf_ns_remove_host", 00:04:25.077 "nvmf_ns_add_host", 00:04:25.077 "nvmf_subsystem_remove_ns", 00:04:25.077 "nvmf_subsystem_set_ns_ana_group", 00:04:25.077 "nvmf_subsystem_add_ns", 00:04:25.077 "nvmf_subsystem_listener_set_ana_state", 00:04:25.077 "nvmf_discovery_get_referrals", 00:04:25.077 "nvmf_discovery_remove_referral", 00:04:25.077 "nvmf_discovery_add_referral", 00:04:25.077 "nvmf_subsystem_remove_listener", 00:04:25.077 "nvmf_subsystem_add_listener", 00:04:25.077 "nvmf_delete_subsystem", 00:04:25.077 "nvmf_create_subsystem", 00:04:25.077 "nvmf_get_subsystems", 00:04:25.077 "env_dpdk_get_mem_stats", 00:04:25.077 "nbd_get_disks", 00:04:25.077 "nbd_stop_disk", 00:04:25.077 "nbd_start_disk", 00:04:25.077 "ublk_recover_disk", 00:04:25.077 "ublk_get_disks", 00:04:25.077 "ublk_stop_disk", 00:04:25.077 "ublk_start_disk", 00:04:25.077 "ublk_destroy_target", 00:04:25.077 "ublk_create_target", 00:04:25.077 "virtio_blk_create_transport", 00:04:25.077 "virtio_blk_get_transports", 00:04:25.077 "vhost_controller_set_coalescing", 00:04:25.077 "vhost_get_controllers", 00:04:25.077 "vhost_delete_controller", 00:04:25.077 "vhost_create_blk_controller", 00:04:25.077 "vhost_scsi_controller_remove_target", 00:04:25.077 "vhost_scsi_controller_add_target", 00:04:25.077 "vhost_start_scsi_controller", 00:04:25.077 "vhost_create_scsi_controller", 00:04:25.077 "thread_set_cpumask", 00:04:25.077 "scheduler_set_options", 00:04:25.077 "framework_get_governor", 00:04:25.077 "framework_get_scheduler", 00:04:25.077 "framework_set_scheduler", 00:04:25.077 "framework_get_reactors", 00:04:25.077 "thread_get_io_channels", 00:04:25.077 "thread_get_pollers", 00:04:25.077 "thread_get_stats", 00:04:25.077 "framework_monitor_context_switch", 00:04:25.077 "spdk_kill_instance", 00:04:25.077 "log_enable_timestamps", 00:04:25.077 "log_get_flags", 00:04:25.077 "log_clear_flag", 00:04:25.077 "log_set_flag", 00:04:25.077 "log_get_level", 00:04:25.077 "log_set_level", 00:04:25.077 "log_get_print_level", 00:04:25.077 "log_set_print_level", 00:04:25.077 "framework_enable_cpumask_locks", 00:04:25.077 "framework_disable_cpumask_locks", 00:04:25.077 "framework_wait_init", 00:04:25.077 "framework_start_init", 00:04:25.077 "scsi_get_devices", 00:04:25.077 "bdev_get_histogram", 00:04:25.077 "bdev_enable_histogram", 00:04:25.077 "bdev_set_qos_limit", 00:04:25.077 "bdev_set_qd_sampling_period", 00:04:25.078 "bdev_get_bdevs", 00:04:25.078 "bdev_reset_iostat", 00:04:25.078 "bdev_get_iostat", 00:04:25.078 "bdev_examine", 00:04:25.078 "bdev_wait_for_examine", 00:04:25.078 "bdev_set_options", 00:04:25.078 "accel_get_stats", 00:04:25.078 "accel_set_options", 00:04:25.078 "accel_set_driver", 00:04:25.078 "accel_crypto_key_destroy", 00:04:25.078 "accel_crypto_keys_get", 00:04:25.078 "accel_crypto_key_create", 00:04:25.078 "accel_assign_opc", 00:04:25.078 "accel_get_module_info", 00:04:25.078 "accel_get_opc_assignments", 00:04:25.078 "vmd_rescan", 00:04:25.078 "vmd_remove_device", 00:04:25.078 "vmd_enable", 00:04:25.078 "sock_get_default_impl", 00:04:25.078 "sock_set_default_impl", 00:04:25.078 "sock_impl_set_options", 00:04:25.078 "sock_impl_get_options", 00:04:25.078 "iobuf_get_stats", 00:04:25.078 "iobuf_set_options", 00:04:25.078 "keyring_get_keys", 00:04:25.078 "framework_get_pci_devices", 00:04:25.078 
"framework_get_config", 00:04:25.078 "framework_get_subsystems", 00:04:25.078 "fsdev_set_opts", 00:04:25.078 "fsdev_get_opts", 00:04:25.078 "trace_get_info", 00:04:25.078 "trace_get_tpoint_group_mask", 00:04:25.078 "trace_disable_tpoint_group", 00:04:25.078 "trace_enable_tpoint_group", 00:04:25.078 "trace_clear_tpoint_mask", 00:04:25.078 "trace_set_tpoint_mask", 00:04:25.078 "notify_get_notifications", 00:04:25.078 "notify_get_types", 00:04:25.078 "spdk_get_version", 00:04:25.078 "rpc_get_methods" 00:04:25.078 ] 00:04:25.078 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.078 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:25.078 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57938 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57938 ']' 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57938 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57938 00:04:25.078 killing process with pid 57938 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57938' 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57938 00:04:25.078 10:32:50 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57938 00:04:26.464 ************************************ 00:04:26.464 END TEST spdkcli_tcp 00:04:26.464 ************************************ 00:04:26.464 00:04:26.464 real 0m2.474s 00:04:26.464 user 0m4.452s 00:04:26.464 sys 0m0.411s 00:04:26.464 10:32:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.464 10:32:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.464 10:32:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:26.464 10:32:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.464 10:32:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.464 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:04:26.464 ************************************ 00:04:26.464 START TEST dpdk_mem_utility 00:04:26.464 ************************************ 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:26.464 * Looking for test storage... 
00:04:26.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:26.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.464 10:32:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.464 --rc genhtml_branch_coverage=1 00:04:26.464 --rc genhtml_function_coverage=1 00:04:26.464 --rc genhtml_legend=1 00:04:26.464 --rc geninfo_all_blocks=1 00:04:26.464 --rc geninfo_unexecuted_blocks=1 00:04:26.464 00:04:26.464 ' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.464 --rc genhtml_branch_coverage=1 00:04:26.464 --rc genhtml_function_coverage=1 00:04:26.464 --rc genhtml_legend=1 00:04:26.464 --rc geninfo_all_blocks=1 00:04:26.464 --rc geninfo_unexecuted_blocks=1 00:04:26.464 00:04:26.464 ' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.464 --rc genhtml_branch_coverage=1 00:04:26.464 --rc genhtml_function_coverage=1 00:04:26.464 --rc genhtml_legend=1 00:04:26.464 --rc geninfo_all_blocks=1 00:04:26.464 --rc geninfo_unexecuted_blocks=1 00:04:26.464 00:04:26.464 ' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.464 --rc genhtml_branch_coverage=1 00:04:26.464 --rc genhtml_function_coverage=1 00:04:26.464 --rc genhtml_legend=1 00:04:26.464 --rc geninfo_all_blocks=1 00:04:26.464 --rc geninfo_unexecuted_blocks=1 00:04:26.464 00:04:26.464 ' 00:04:26.464 10:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:26.464 10:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58044 00:04:26.464 10:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58044 00:04:26.464 10:32:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58044 ']' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.464 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.464 [2024-11-18 10:32:52.224408] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:26.464 [2024-11-18 10:32:52.224527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 00:04:26.726 [2024-11-18 10:32:52.384927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.726 [2024-11-18 10:32:52.508708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.672 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.672 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:27.672 10:32:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:27.672 10:32:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:27.672 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.672 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.672 { 00:04:27.672 "filename": "/tmp/spdk_mem_dump.txt" 00:04:27.672 } 00:04:27.672 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.672 10:32:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:27.672 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:27.672 1 heaps totaling size 816.000000 MiB 00:04:27.672 size: 816.000000 MiB heap id: 0 00:04:27.672 end heaps---------- 00:04:27.672 9 mempools totaling size 595.772034 MiB 00:04:27.672 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:27.672 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:27.672 size: 92.545471 MiB name: bdev_io_58044 00:04:27.672 size: 50.003479 MiB name: msgpool_58044 00:04:27.672 size: 36.509338 MiB name: fsdev_io_58044 00:04:27.672 size: 21.763794 MiB name: PDU_Pool 00:04:27.672 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:27.672 size: 4.133484 MiB name: evtpool_58044 00:04:27.672 size: 0.026123 MiB name: Session_Pool 00:04:27.672 end mempools------- 00:04:27.672 6 memzones totaling size 4.142822 MiB 00:04:27.672 size: 1.000366 MiB name: RG_ring_0_58044 00:04:27.672 size: 1.000366 MiB name: RG_ring_1_58044 00:04:27.672 size: 1.000366 MiB name: RG_ring_4_58044 00:04:27.672 size: 1.000366 MiB name: RG_ring_5_58044 00:04:27.672 size: 0.125366 MiB name: RG_ring_2_58044 00:04:27.672 size: 0.015991 MiB name: RG_ring_3_58044 00:04:27.672 end memzones------- 00:04:27.672 10:32:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:27.672 heap id: 0 total size: 816.000000 MiB number of busy elements: 327 number of free elements: 18 00:04:27.672 list of free elements. 
size: 16.788452 MiB 00:04:27.672 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:27.672 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:27.672 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:27.672 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:27.672 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:27.672 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:27.672 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:27.672 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:27.672 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:27.672 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:27.672 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:27.672 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:04:27.672 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:27.672 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:27.672 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:27.672 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:27.672 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:27.672 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:27.672 list of standard malloc elements. size: 199.290649 MiB 00:04:27.672 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:27.672 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:27.672 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:27.672 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:27.672 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:27.672 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:27.672 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:27.672 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:27.672 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:27.672 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:27.672 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:27.672 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:27.672 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:27.672 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:27.672 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:27.672 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:27.672 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:27.673 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:27.673 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac911c0 with size: 0.000244 MiB 
00:04:27.673 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:27.673 element at 
address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:27.673 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:27.674 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:27.674 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806cc80 
with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:27.674 element at address: 0x20002806fd80 with size: 0.000244 MiB 
00:04:27.674 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:27.674 list of memzone associated elements. size: 599.920898 MiB 00:04:27.674 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:27.674 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:27.674 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:27.674 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:27.674 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:27.674 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58044_0 00:04:27.674 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:27.674 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58044_0 00:04:27.674 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:27.674 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58044_0 00:04:27.674 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:27.674 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:27.674 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:27.674 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:27.674 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:27.674 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58044_0 00:04:27.674 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:27.674 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58044 00:04:27.674 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:27.674 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58044 00:04:27.674 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:27.674 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:27.674 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:27.674 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:27.674 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:27.674 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:27.674 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:27.674 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:27.674 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:27.674 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58044 00:04:27.674 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:27.674 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58044 00:04:27.674 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:27.674 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58044 00:04:27.674 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:27.674 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58044 00:04:27.674 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:27.674 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58044 00:04:27.674 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:27.674 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58044 00:04:27.674 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:27.674 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:27.674 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:27.674 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:04:27.674 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:27.674 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:27.674 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:27.674 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58044 00:04:27.674 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:27.674 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58044 00:04:27.674 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:27.674 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:27.674 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:27.674 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:27.674 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:27.674 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58044 00:04:27.674 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:27.674 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:27.674 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:27.674 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58044 00:04:27.674 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:27.674 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58044 00:04:27.675 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:27.675 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58044 00:04:27.675 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:27.675 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:27.675 10:32:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:27.675 10:32:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58044 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58044 ']' 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58044 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58044 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.675 killing process with pid 58044 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58044' 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58044 00:04:27.675 10:32:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58044 00:04:29.060 00:04:29.060 real 0m2.774s 00:04:29.060 user 0m2.713s 00:04:29.060 sys 0m0.479s 00:04:29.060 10:32:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.061 ************************************ 00:04:29.061 END TEST dpdk_mem_utility 00:04:29.061 ************************************ 00:04:29.061 10:32:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.061 10:32:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.061 10:32:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.061 
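The element and memzone listing above is the DPDK memory report that the dpdk_mem_utility test (test_dpdk_mem_info.sh) captures from the target process (pid 58044) before killing it. A minimal sketch of pulling a similar report from a running SPDK application by hand, assuming the env_dpdk_get_mem_stats RPC provided by recent SPDK releases and a working directory at the repo root; the dump path in the last line is only illustrative, the real one is returned in the RPC reply:

# Ask the running SPDK app to write its DPDK memory statistics to a file.
# The JSON reply names the file that was written.
./scripts/rpc.py env_dpdk_get_mem_stats
# Inspect the report (path assumed here; use the filename from the reply above).
less /tmp/spdk_mem_dump.txt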
10:32:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.061 10:32:54 -- common/autotest_common.sh@10 -- # set +x 00:04:29.061 ************************************ 00:04:29.061 START TEST event 00:04:29.061 ************************************ 00:04:29.061 10:32:54 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.061 * Looking for test storage... 00:04:29.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:29.061 10:32:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.061 10:32:54 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.061 10:32:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.321 10:32:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.321 10:32:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.321 10:32:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.321 10:32:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.321 10:32:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.321 10:32:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.321 10:32:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.321 10:32:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.321 10:32:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.321 10:32:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.321 10:32:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.321 10:32:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.321 10:32:54 event -- scripts/common.sh@344 -- # case "$op" in 00:04:29.321 10:32:54 event -- scripts/common.sh@345 -- # : 1 00:04:29.321 10:32:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.321 10:32:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.321 10:32:54 event -- scripts/common.sh@365 -- # decimal 1 00:04:29.321 10:32:54 event -- scripts/common.sh@353 -- # local d=1 00:04:29.321 10:32:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.321 10:32:54 event -- scripts/common.sh@355 -- # echo 1 00:04:29.321 10:32:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.321 10:32:54 event -- scripts/common.sh@366 -- # decimal 2 00:04:29.321 10:32:54 event -- scripts/common.sh@353 -- # local d=2 00:04:29.321 10:32:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.321 10:32:54 event -- scripts/common.sh@355 -- # echo 2 00:04:29.321 10:32:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.321 10:32:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.321 10:32:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.321 10:32:54 event -- scripts/common.sh@368 -- # return 0 00:04:29.321 10:32:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.321 10:32:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.321 --rc genhtml_branch_coverage=1 00:04:29.321 --rc genhtml_function_coverage=1 00:04:29.321 --rc genhtml_legend=1 00:04:29.321 --rc geninfo_all_blocks=1 00:04:29.321 --rc geninfo_unexecuted_blocks=1 00:04:29.321 00:04:29.321 ' 00:04:29.321 10:32:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.321 --rc genhtml_branch_coverage=1 00:04:29.321 --rc genhtml_function_coverage=1 00:04:29.321 --rc genhtml_legend=1 00:04:29.321 --rc geninfo_all_blocks=1 00:04:29.321 --rc geninfo_unexecuted_blocks=1 00:04:29.321 00:04:29.321 ' 00:04:29.321 10:32:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.321 --rc genhtml_branch_coverage=1 00:04:29.321 --rc genhtml_function_coverage=1 00:04:29.321 --rc genhtml_legend=1 00:04:29.321 --rc geninfo_all_blocks=1 00:04:29.321 --rc geninfo_unexecuted_blocks=1 00:04:29.321 00:04:29.321 ' 00:04:29.322 10:32:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.322 --rc genhtml_branch_coverage=1 00:04:29.322 --rc genhtml_function_coverage=1 00:04:29.322 --rc genhtml_legend=1 00:04:29.322 --rc geninfo_all_blocks=1 00:04:29.322 --rc geninfo_unexecuted_blocks=1 00:04:29.322 00:04:29.322 ' 00:04:29.322 10:32:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:29.322 10:32:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.322 10:32:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.322 10:32:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:29.322 10:32:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.322 10:32:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.322 ************************************ 00:04:29.322 START TEST event_perf 00:04:29.322 ************************************ 00:04:29.322 10:32:54 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.322 Running I/O for 1 seconds...[2024-11-18 
10:32:55.000661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:29.322 [2024-11-18 10:32:55.000776] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58141 ] 00:04:29.322 [2024-11-18 10:32:55.158954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.581 [2024-11-18 10:32:55.247714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.581 [2024-11-18 10:32:55.248098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.581 [2024-11-18 10:32:55.248010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.581 Running I/O for 1 seconds...[2024-11-18 10:32:55.248121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.520 00:04:30.520 lcore 0: 208431 00:04:30.520 lcore 1: 208433 00:04:30.520 lcore 2: 208434 00:04:30.520 lcore 3: 208433 00:04:30.520 done. 00:04:30.520 00:04:30.520 ************************************ 00:04:30.520 END TEST event_perf 00:04:30.520 ************************************ 00:04:30.520 real 0m1.412s 00:04:30.520 user 0m4.194s 00:04:30.520 sys 0m0.101s 00:04:30.520 10:32:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.520 10:32:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.780 10:32:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.780 10:32:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:30.780 10:32:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.780 10:32:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.780 ************************************ 00:04:30.780 START TEST event_reactor 00:04:30.780 ************************************ 00:04:30.780 10:32:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.780 [2024-11-18 10:32:56.464860] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
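The four lcore counters above (roughly 208,000 events each in one second) are the output of the event_perf microbenchmark launched a few lines earlier. A hedged sketch of running the same binary directly, using the arguments visible in the trace; the path is this CI VM's checkout, so adjust it to your own build tree:

# SPDK event-framework benchmark: -m selects the core mask (0xF = cores 0-3),
# -t the runtime in seconds; it prints the events processed per lcore.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1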
00:04:30.780 [2024-11-18 10:32:56.464969] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58175 ] 00:04:30.780 [2024-11-18 10:32:56.620504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.038 [2024-11-18 10:32:56.703297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.974 test_start 00:04:31.974 oneshot 00:04:31.974 tick 100 00:04:31.974 tick 100 00:04:31.974 tick 250 00:04:31.974 tick 100 00:04:31.974 tick 100 00:04:31.974 tick 100 00:04:31.974 tick 250 00:04:31.974 tick 500 00:04:31.974 tick 100 00:04:31.974 tick 100 00:04:31.974 tick 250 00:04:31.974 tick 100 00:04:31.974 tick 100 00:04:31.974 test_end 00:04:31.974 00:04:31.974 real 0m1.391s 00:04:31.974 user 0m1.211s 00:04:31.974 sys 0m0.073s 00:04:31.974 10:32:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.974 ************************************ 00:04:31.974 END TEST event_reactor 00:04:31.974 ************************************ 00:04:31.974 10:32:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:32.232 10:32:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.232 10:32:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:32.232 10:32:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.232 10:32:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.232 ************************************ 00:04:32.232 START TEST event_reactor_perf 00:04:32.232 ************************************ 00:04:32.232 10:32:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.232 [2024-11-18 10:32:57.909240] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
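The test_start/tick/test_end block above is the output of the single-core reactor test started just before it. A sketch of the direct invocation, taken from the trace (path specific to this CI VM):

# Run the reactor smoke test for 1 second on a single core
# (the EAL parameters line above shows -c 0x1); the tick lines
# above are its own progress output.
/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1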
00:04:32.232 [2024-11-18 10:32:57.909343] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:04:32.232 [2024-11-18 10:32:58.064423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.490 [2024-11-18 10:32:58.139061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.425 test_start 00:04:33.425 test_end 00:04:33.425 Performance: 418373 events per second 00:04:33.425 ************************************ 00:04:33.425 END TEST event_reactor_perf 00:04:33.425 ************************************ 00:04:33.425 00:04:33.425 real 0m1.377s 00:04:33.425 user 0m1.211s 00:04:33.425 sys 0m0.059s 00:04:33.425 10:32:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.425 10:32:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.690 10:32:59 event -- event/event.sh@49 -- # uname -s 00:04:33.690 10:32:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.690 10:32:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.690 10:32:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.690 10:32:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.690 10:32:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.690 ************************************ 00:04:33.690 START TEST event_scheduler 00:04:33.690 ************************************ 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.690 * Looking for test storage... 
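The "Performance: 418373 events per second" figure above is reactor_perf's single-core result. A sketch of reproducing that measurement directly, using the invocation recorded in the trace (again, the path belongs to this CI VM):

# Single-core event-processing throughput test; -t sets the runtime in
# seconds and the result is printed as 'Performance: N events per second'.
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1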
00:04:33.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.690 10:32:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.690 --rc genhtml_branch_coverage=1 00:04:33.690 --rc genhtml_function_coverage=1 00:04:33.690 --rc genhtml_legend=1 00:04:33.690 --rc geninfo_all_blocks=1 00:04:33.690 --rc geninfo_unexecuted_blocks=1 00:04:33.690 00:04:33.690 ' 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.690 --rc genhtml_branch_coverage=1 00:04:33.690 --rc genhtml_function_coverage=1 00:04:33.690 --rc genhtml_legend=1 00:04:33.690 --rc geninfo_all_blocks=1 00:04:33.690 --rc geninfo_unexecuted_blocks=1 00:04:33.690 00:04:33.690 ' 00:04:33.690 10:32:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.690 --rc genhtml_branch_coverage=1 00:04:33.690 --rc genhtml_function_coverage=1 00:04:33.690 --rc genhtml_legend=1 00:04:33.690 --rc geninfo_all_blocks=1 00:04:33.690 --rc geninfo_unexecuted_blocks=1 00:04:33.690 00:04:33.691 ' 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.691 --rc genhtml_branch_coverage=1 00:04:33.691 --rc genhtml_function_coverage=1 00:04:33.691 --rc genhtml_legend=1 00:04:33.691 --rc geninfo_all_blocks=1 00:04:33.691 --rc geninfo_unexecuted_blocks=1 00:04:33.691 00:04:33.691 ' 00:04:33.691 10:32:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.691 10:32:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58282 00:04:33.691 10:32:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.691 10:32:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58282 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58282 ']' 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.691 10:32:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.691 10:32:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.691 [2024-11-18 10:32:59.531694] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:33.691 [2024-11-18 10:32:59.531813] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ] 00:04:33.950 [2024-11-18 10:32:59.692113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.950 [2024-11-18 10:32:59.793376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.950 [2024-11-18 10:32:59.793647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.950 [2024-11-18 10:32:59.793825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.950 [2024-11-18 10:32:59.793951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:34.518 10:33:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.518 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.518 POWER: Cannot set governor of lcore 0 to performance 00:04:34.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.518 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.518 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.518 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.518 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:34.518 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:34.518 POWER: Unable to set Power Management Environment for lcore 0 00:04:34.518 [2024-11-18 10:33:00.375232] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:34.518 [2024-11-18 10:33:00.375252] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:34.518 [2024-11-18 10:33:00.375261] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.518 [2024-11-18 10:33:00.375276] 
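At this point the scheduler test app (pid 58282) is running on cores 0-3 with core 2 as the main lcore, still paused by --wait-for-rpc. A condensed sketch of the launch-and-configure sequence the harness performs here, assuming the waitforlisten helper from test/common/autotest_common.sh and the default /var/tmp/spdk.sock RPC socket; the two RPCs are the ones issued via rpc_cmd in the trace that follows, shown here as direct rpc.py calls:

# Start the scheduler test app: 4 cores, main lcore 2, framework init
# deferred until an RPC allows it.
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
waitforlisten "$scheduler_pid"      # harness helper: wait for the RPC socket
# Select the dynamic scheduler, then let framework initialization proceed.
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init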
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.518 [2024-11-18 10:33:00.375284] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.518 [2024-11-18 10:33:00.375292] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.518 10:33:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.518 10:33:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 [2024-11-18 10:33:00.598710] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:34.776 10:33:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.776 10:33:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:34.776 10:33:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.776 10:33:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.776 10:33:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 ************************************ 00:04:34.776 START TEST scheduler_create_thread 00:04:34.776 ************************************ 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 2 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 3 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 4 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 5 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.776 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.033 6 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.033 7 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.033 8 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.033 9 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.033 10 00:04:35.033 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.034 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.629 10:33:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.629 10:33:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:35.629 10:33:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:35.629 10:33:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.629 10:33:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.565 ************************************ 00:04:36.565 END TEST scheduler_create_thread 00:04:36.565 ************************************ 00:04:36.565 10:33:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.565 00:04:36.565 real 0m1.753s 00:04:36.565 user 0m0.019s 00:04:36.565 sys 0m0.001s 00:04:36.565 10:33:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.565 10:33:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.565 10:33:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:36.565 10:33:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58282 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58282 ']' 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58282 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58282 00:04:36.566 killing process with pid 58282 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58282' 00:04:36.566 10:33:02 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58282 00:04:36.566 10:33:02 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58282 00:04:37.132 [2024-11-18 10:33:02.847293] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:37.697 00:04:37.697 real 0m4.090s 00:04:37.697 user 0m6.773s 00:04:37.697 sys 0m0.347s 00:04:37.697 ************************************ 00:04:37.697 END TEST event_scheduler 00:04:37.697 ************************************ 00:04:37.697 10:33:03 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.697 10:33:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 10:33:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:37.697 10:33:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:37.697 10:33:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.697 10:33:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.697 10:33:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 ************************************ 00:04:37.697 START TEST app_repeat 00:04:37.697 ************************************ 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:37.697 Process app_repeat pid: 58377 00:04:37.697 spdk_app_start Round 0 00:04:37.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58377 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58377' 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58377 /var/tmp/spdk-nbd.sock 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58377 ']' 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.697 10:33:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.697 10:33:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 [2024-11-18 10:33:03.522441] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:37.697 [2024-11-18 10:33:03.522549] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58377 ] 00:04:37.958 [2024-11-18 10:33:03.678286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.958 [2024-11-18 10:33:03.756050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.958 [2024-11-18 10:33:03.756126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.523 10:33:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.523 10:33:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:38.523 10:33:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.780 Malloc0 00:04:38.780 10:33:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.038 Malloc1 00:04:39.038 10:33:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.038 10:33:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.296 /dev/nbd0 00:04:39.296 10:33:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.296 10:33:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:39.296 10:33:05 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.296 1+0 records in 00:04:39.296 1+0 records out 00:04:39.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259378 s, 15.8 MB/s 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.296 10:33:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.296 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.296 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.296 10:33:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.553 /dev/nbd1 00:04:39.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.553 1+0 records in 00:04:39.553 1+0 records out 00:04:39.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020598 s, 19.9 MB/s 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.553 10:33:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.553 
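The xtrace above shows the harness's waitfornbd check: after nbd_start_disk it polls /proc/partitions until the nbd node appears, then reads one 4 KiB block with O_DIRECT to prove the device actually serves I/O, and finally confirms the read produced a non-empty file. A minimal sketch of that pattern, assuming a scratch path and a sleep-based backoff that are not visible in the trace (the trace succeeds on the first attempt):

# Sketch of the waitfornbd pattern seen in autotest_common.sh's trace.
waitfornbd() {
    local nbd_name=$1 i size
    local test_file=/tmp/nbdtest    # illustrative; the trace writes spdk/test/event/nbdtest

    # Poll until the kernel lists the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # backoff assumed; not shown in the trace
    done

    # Read one block with O_DIRECT so the page cache cannot fake a live device.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done

    # A successful read leaves a non-empty file behind (the '!=' 0 check above).
    size=$(stat -c %s "$test_file")
    rm -f "$test_file"
    [[ $size != 0 ]]
}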
10:33:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.811 { 00:04:39.811 "nbd_device": "/dev/nbd0", 00:04:39.811 "bdev_name": "Malloc0" 00:04:39.811 }, 00:04:39.811 { 00:04:39.811 "nbd_device": "/dev/nbd1", 00:04:39.811 "bdev_name": "Malloc1" 00:04:39.811 } 00:04:39.811 ]' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.811 { 00:04:39.811 "nbd_device": "/dev/nbd0", 00:04:39.811 "bdev_name": "Malloc0" 00:04:39.811 }, 00:04:39.811 { 00:04:39.811 "nbd_device": "/dev/nbd1", 00:04:39.811 "bdev_name": "Malloc1" 00:04:39.811 } 00:04:39.811 ]' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.811 /dev/nbd1' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.811 /dev/nbd1' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.811 256+0 records in 00:04:39.811 256+0 records out 00:04:39.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705534 s, 149 MB/s 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.811 256+0 records in 00:04:39.811 256+0 records out 00:04:39.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130241 s, 80.5 MB/s 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.811 256+0 records in 00:04:39.811 256+0 records out 00:04:39.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162797 s, 64.4 MB/s 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.811 10:33:05 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.811 10:33:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.069 10:33:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.327 10:33:05 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.327 10:33:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.327 10:33:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.327 10:33:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.327 10:33:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.584 10:33:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.585 10:33:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.585 10:33:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.845 10:33:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.414 [2024-11-18 10:33:07.038460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.414 [2024-11-18 10:33:07.105397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.414 [2024-11-18 10:33:07.105561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.414 [2024-11-18 10:33:07.202123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.414 [2024-11-18 10:33:07.202166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:43.950 10:33:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.950 spdk_app_start Round 1 00:04:43.950 10:33:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:43.950 10:33:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58377 /var/tmp/spdk-nbd.sock 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58377 ']' 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
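Round 0 above is driven entirely through SPDK's JSON-RPC script against the app's Unix socket: create two 64 MiB malloc bdevs with 4096-byte blocks, export them as /dev/nbd0 and /dev/nbd1, query the mappings, then tear everything down. Condensed to the RPC skeleton visible in the trace (socket path and sizes as shown; error handling and the I/O body omitted):

# Skeleton of one app_repeat round: export malloc bdevs over nbd via JSON-RPC.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Two 64 MiB RAM-backed bdevs, 4096-byte block size; names are auto-assigned.
malloc0=$($rpc bdev_malloc_create 64 4096)   # -> Malloc0
malloc1=$($rpc bdev_malloc_create 64 4096)   # -> Malloc1

# Attach each bdev to a kernel nbd node ('modprobe nbd' ran earlier in event.sh).
$rpc nbd_start_disk "$malloc0" /dev/nbd0
$rpc nbd_start_disk "$malloc1" /dev/nbd1

# nbd_get_disks reports active mappings as JSON, e.g.
# [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
$rpc nbd_get_disks | jq -r '.[] | .nbd_device'

# ... dd/cmp data verification happens here ...

# Detach both devices and ask the app to stop this iteration.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM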
00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.950 10:33:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.950 10:33:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.207 Malloc0 00:04:44.207 10:33:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.465 Malloc1 00:04:44.465 10:33:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.465 10:33:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.723 /dev/nbd0 00:04:44.723 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.723 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.723 1+0 records in 00:04:44.723 1+0 records out 
00:04:44.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186832 s, 21.9 MB/s 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.723 10:33:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.723 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.723 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.723 10:33:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.723 /dev/nbd1 00:04:44.723 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.982 1+0 records in 00:04:44.982 1+0 records out 00:04:44.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200891 s, 20.4 MB/s 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.982 10:33:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.982 { 00:04:44.982 "nbd_device": "/dev/nbd0", 00:04:44.982 "bdev_name": "Malloc0" 00:04:44.982 }, 00:04:44.982 { 00:04:44.982 "nbd_device": "/dev/nbd1", 00:04:44.982 "bdev_name": "Malloc1" 00:04:44.982 } 
00:04:44.982 ]' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.982 { 00:04:44.982 "nbd_device": "/dev/nbd0", 00:04:44.982 "bdev_name": "Malloc0" 00:04:44.982 }, 00:04:44.982 { 00:04:44.982 "nbd_device": "/dev/nbd1", 00:04:44.982 "bdev_name": "Malloc1" 00:04:44.982 } 00:04:44.982 ]' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.982 /dev/nbd1' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.982 /dev/nbd1' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.982 10:33:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.241 256+0 records in 00:04:45.241 256+0 records out 00:04:45.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080337 s, 131 MB/s 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.241 256+0 records in 00:04:45.241 256+0 records out 00:04:45.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175039 s, 59.9 MB/s 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.241 256+0 records in 00:04:45.241 256+0 records out 00:04:45.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197151 s, 53.2 MB/s 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.241 10:33:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.501 10:33:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.762 10:33:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.762 10:33:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.023 10:33:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.594 [2024-11-18 10:33:12.388489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.594 [2024-11-18 10:33:12.455918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.594 [2024-11-18 10:33:12.455969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.853 [2024-11-18 10:33:12.550762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.853 [2024-11-18 10:33:12.550819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.420 10:33:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.420 spdk_app_start Round 2 00:04:49.420 10:33:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:49.420 10:33:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58377 /var/tmp/spdk-nbd.sock 00:04:49.420 10:33:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58377 ']' 00:04:49.420 10:33:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.420 10:33:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.420 10:33:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
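Each round's data check follows the same write-then-verify shape traced above: generate 1 MiB of random data once, dd it onto every exported nbd device with O_DIRECT, then cmp the first 1 MiB of each device back against the source file before deleting it. A condensed sketch of that nbd_dd_data_verify flow (temp path shortened; the trace's version lives in nbd_common.sh):

# Write the same 1 MiB of random data to every nbd device, then verify it.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest   # illustrative; the trace uses spdk/test/event/nbdrandtest

# 256 x 4096-byte blocks = 1 MiB of reference data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# O_DIRECT writes bypass the page cache, so the later compare reads the bdev.
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# Byte-for-byte comparison of the first 1 MiB; cmp exits non-zero on mismatch.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done

rm "$tmp_file"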
00:04:49.420 10:33:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.420 10:33:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.420 10:33:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.420 10:33:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:49.420 10:33:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.420 Malloc0 00:04:49.420 10:33:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.677 Malloc1 00:04:49.677 10:33:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.677 10:33:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.936 /dev/nbd0 00:04:49.936 10:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.936 10:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.936 1+0 records in 00:04:49.936 1+0 records out 
00:04:49.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155552 s, 26.3 MB/s 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.936 10:33:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.936 10:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.936 10:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.936 10:33:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.194 /dev/nbd1 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.194 1+0 records in 00:04:50.194 1+0 records out 00:04:50.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001577 s, 26.0 MB/s 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.194 10:33:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.194 10:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.452 10:33:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.452 { 00:04:50.452 "nbd_device": "/dev/nbd0", 00:04:50.452 "bdev_name": "Malloc0" 00:04:50.452 }, 00:04:50.452 { 00:04:50.452 "nbd_device": "/dev/nbd1", 00:04:50.452 "bdev_name": "Malloc1" 00:04:50.452 } 
00:04:50.452 ]' 00:04:50.452 10:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.452 { 00:04:50.452 "nbd_device": "/dev/nbd0", 00:04:50.452 "bdev_name": "Malloc0" 00:04:50.452 }, 00:04:50.452 { 00:04:50.452 "nbd_device": "/dev/nbd1", 00:04:50.452 "bdev_name": "Malloc1" 00:04:50.452 } 00:04:50.452 ]' 00:04:50.452 10:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.452 10:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.453 /dev/nbd1' 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.453 /dev/nbd1' 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.453 256+0 records in 00:04:50.453 256+0 records out 00:04:50.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113128 s, 92.7 MB/s 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.453 256+0 records in 00:04:50.453 256+0 records out 00:04:50.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146938 s, 71.4 MB/s 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.453 256+0 records in 00:04:50.453 256+0 records out 00:04:50.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151994 s, 69.0 MB/s 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.453 10:33:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.453 10:33:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.711 10:33:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.969 10:33:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.228 10:33:16 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.228 10:33:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.228 10:33:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.487 10:33:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.053 [2024-11-18 10:33:17.753974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.053 [2024-11-18 10:33:17.824680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.053 [2024-11-18 10:33:17.824702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.053 [2024-11-18 10:33:17.926248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.053 [2024-11-18 10:33:17.926292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.591 10:33:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58377 /var/tmp/spdk-nbd.sock 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58377 ']' 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
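The round structure that repeats through this log comes from event.sh's driver loop: app_repeat is launched once with -t 4 (four start/stop iterations of spdk_app_start), and the harness loops three times, waiting for the RPC socket to come up, running the nbd verification, then sending SIGTERM through the RPC layer so the app falls through to its next round. Roughly, relying on the waitforlisten and killprocess helpers shown elsewhere in the trace:

# Shape of event.sh's app_repeat loop (helper internals elided; see the trace).
repeat_pid=$!                      # app_repeat already launched with -t 4
rpc_server=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # Block until the app's RPC socket accepts connections again.
    waitforlisten "$repeat_pid" "$rpc_server"

    # ... bdev_malloc_create / nbd_start_disk / dd / cmp round body ...

    # Stop the current iteration; -t 4 makes the app start the next one.
    "$rpc" -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3
done

killprocess "$repeat_pid"          # the final round exits; reap the process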
00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.591 10:33:20 event.app_repeat -- event/event.sh@39 -- # killprocess 58377 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58377 ']' 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58377 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58377 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.591 killing process with pid 58377 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58377' 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58377 00:04:54.591 10:33:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58377 00:04:55.156 spdk_app_start is called in Round 0. 00:04:55.156 Shutdown signal received, stop current app iteration 00:04:55.156 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:04:55.156 spdk_app_start is called in Round 1. 00:04:55.156 Shutdown signal received, stop current app iteration 00:04:55.156 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:04:55.156 spdk_app_start is called in Round 2. 00:04:55.156 Shutdown signal received, stop current app iteration 00:04:55.156 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:04:55.156 spdk_app_start is called in Round 3. 00:04:55.156 Shutdown signal received, stop current app iteration 00:04:55.156 10:33:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:55.156 10:33:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:55.156 00:04:55.156 real 0m17.461s 00:04:55.156 user 0m38.302s 00:04:55.156 sys 0m2.024s 00:04:55.156 ************************************ 00:04:55.156 END TEST app_repeat 00:04:55.156 ************************************ 00:04:55.156 10:33:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.156 10:33:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.156 10:33:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:55.156 10:33:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:55.156 10:33:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.156 10:33:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.156 10:33:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.156 ************************************ 00:04:55.156 START TEST cpu_locks 00:04:55.156 ************************************ 00:04:55.156 10:33:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:55.414 * Looking for test storage... 
00:04:55.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:55.414 10:33:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.414 10:33:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.414 10:33:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.414 10:33:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:55.414 10:33:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.415 10:33:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.415 10:33:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.415 10:33:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.415 --rc genhtml_branch_coverage=1 00:04:55.415 --rc genhtml_function_coverage=1 00:04:55.415 --rc genhtml_legend=1 00:04:55.415 --rc geninfo_all_blocks=1 00:04:55.415 --rc geninfo_unexecuted_blocks=1 00:04:55.415 00:04:55.415 ' 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.415 --rc genhtml_branch_coverage=1 00:04:55.415 --rc genhtml_function_coverage=1 
00:04:55.415 --rc genhtml_legend=1 00:04:55.415 --rc geninfo_all_blocks=1 00:04:55.415 --rc geninfo_unexecuted_blocks=1 00:04:55.415 00:04:55.415 ' 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.415 --rc genhtml_branch_coverage=1 00:04:55.415 --rc genhtml_function_coverage=1 00:04:55.415 --rc genhtml_legend=1 00:04:55.415 --rc geninfo_all_blocks=1 00:04:55.415 --rc geninfo_unexecuted_blocks=1 00:04:55.415 00:04:55.415 ' 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.415 --rc genhtml_branch_coverage=1 00:04:55.415 --rc genhtml_function_coverage=1 00:04:55.415 --rc genhtml_legend=1 00:04:55.415 --rc geninfo_all_blocks=1 00:04:55.415 --rc geninfo_unexecuted_blocks=1 00:04:55.415 00:04:55.415 ' 00:04:55.415 10:33:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:55.415 10:33:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:55.415 10:33:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:55.415 10:33:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.415 10:33:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.415 ************************************ 00:04:55.415 START TEST default_locks 00:04:55.415 ************************************ 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58802 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58802 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58802 ']' 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.415 10:33:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.415 [2024-11-18 10:33:21.232524] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
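The version probe traced just above is scripts/common.sh deciding which coverage flags to export: `lcov --version | awk '{print $NF}'` yields the installed version (1.15 in this run), and `lt 1.15 2` splits each operand on `.`, `-` and `:` and compares field by field; since 1.15 predates lcov 2.x, the legacy `--rc lcov_branch_coverage=1`-style options are folded into LCOV_OPTS. A stand-alone sketch of that comparison, written here purely for illustration (numeric fields only):

    # Return 0 (true) when dotted version $1 is strictly older than $2.
    version_lt() {
      local IFS=.-: v n
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'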
00:04:55.415 [2024-11-18 10:33:21.232638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58802 ] 00:04:55.673 [2024-11-18 10:33:21.387973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.673 [2024-11-18 10:33:21.485930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.238 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.238 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:56.238 10:33:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58802 00:04:56.238 10:33:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.238 10:33:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58802 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58802 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58802 ']' 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58802 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58802 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.496 killing process with pid 58802 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58802' 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58802 00:04:56.496 10:33:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58802 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58802 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58802 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58802 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58802 ']' 00:04:57.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
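The default_locks case traced above reduces to: start one spdk_tgt pinned to core 0 (-m 0x1), then have the locks_exist helper confirm the kernel really sees a POSIX lock named spdk_cpu_lock held by that pid. As a minimal sketch (pid value taken from this run, purely illustrative):

    pid=58802    # the spdk_tgt started above
    # lslocks lists the file locks a process holds; SPDK names its per-core
    # lock files spdk_cpu_lock_NNN, so one grep proves the claim is in place.
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"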
00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.872 ERROR: process (pid: 58802) is no longer running 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58802) - No such process 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.872 ************************************ 00:04:57.872 END TEST default_locks 00:04:57.872 ************************************ 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.872 00:04:57.872 real 0m2.418s 00:04:57.872 user 0m2.415s 00:04:57.872 sys 0m0.422s 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.872 10:33:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 10:33:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:57.872 10:33:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.872 10:33:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.872 10:33:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 ************************************ 00:04:57.872 START TEST default_locks_via_rpc 00:04:57.872 ************************************ 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58866 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58866 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58866 ']' 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.872 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 10:33:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.872 [2024-11-18 10:33:23.702998] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:57.872 [2024-11-18 10:33:23.703118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58866 ] 00:04:58.131 [2024-11-18 10:33:23.858311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.131 [2024-11-18 10:33:23.933884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58866 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58866 00:04:58.697 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58866 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58866 ']' 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58866 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:58.954 10:33:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58866 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.954 killing process with pid 58866 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58866' 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58866 00:04:58.954 10:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58866 00:05:00.328 00:05:00.328 real 0m2.218s 00:05:00.328 user 0m2.259s 00:05:00.329 sys 0m0.376s 00:05:00.329 10:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.329 ************************************ 00:05:00.329 END TEST default_locks_via_rpc 00:05:00.329 ************************************ 00:05:00.329 10:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.329 10:33:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:00.329 10:33:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.329 10:33:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.329 10:33:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.329 ************************************ 00:05:00.329 START TEST non_locking_app_on_locked_coremask 00:05:00.329 ************************************ 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58918 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58918 /var/tmp/spdk.sock 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58918 ']' 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.329 10:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.329 [2024-11-18 10:33:25.948580] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
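The default_locks_via_rpc case that just finished exercises the same lock, but toggled at runtime: rpc_cmd drives framework_disable_cpumask_locks and framework_enable_cpumask_locks against the live target, checking that the spdk_cpu_lock claim disappears and reappears accordingly. Roughly, assuming the standard scripts/rpc.py location in this run's repo layout ($tgt_pid is a placeholder for the target's pid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drops the spdk_cpu_lock_* claims
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: still locked"
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-acquires them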
00:05:00.329 [2024-11-18 10:33:25.948673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58918 ] 00:05:00.329 [2024-11-18 10:33:26.098181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.329 [2024-11-18 10:33:26.174254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.262 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.262 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.262 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58934 00:05:01.262 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58934 /var/tmp/spdk2.sock 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58934 ']' 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.263 10:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.263 [2024-11-18 10:33:26.870020] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:01.263 [2024-11-18 10:33:26.870140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58934 ] 00:05:01.263 [2024-11-18 10:33:27.033671] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
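non_locking_app_on_locked_coremask pairs the locked target with a second instance started with --disable-cpumask-locks; the "CPU core locks deactivated" notice just above is that second instance acknowledging the flag, and the point of the test is that both may then share core 0. The arrangement, as a sketch from the repo root:

    build/bin/spdk_tgt -m 0x1 &                                                  # claims core 0's lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # opts out, coexists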
00:05:01.263 [2024-11-18 10:33:27.033712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.529 [2024-11-18 10:33:27.186765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.484 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.484 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.484 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58918 00:05:02.484 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58918 00:05:02.484 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58918 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58918 ']' 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58918 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58918 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.743 killing process with pid 58918 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58918' 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58918 00:05:02.743 10:33:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58918 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58934 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58934 ']' 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58934 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58934 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.288 killing process with pid 58934 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58934' 00:05:05.288 10:33:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58934 00:05:05.288 10:33:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58934 00:05:06.226 00:05:06.226 real 0m6.058s 00:05:06.226 user 0m6.341s 00:05:06.226 sys 0m0.797s 00:05:06.226 10:33:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.226 ************************************ 00:05:06.226 END TEST non_locking_app_on_locked_coremask 00:05:06.226 ************************************ 00:05:06.226 10:33:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.226 10:33:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.226 10:33:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.226 10:33:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.226 10:33:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.226 ************************************ 00:05:06.226 START TEST locking_app_on_unlocked_coremask 00:05:06.226 ************************************ 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59025 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59025 /var/tmp/spdk.sock 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59025 ']' 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.226 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.226 [2024-11-18 10:33:32.077890] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:06.226 [2024-11-18 10:33:32.078013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ] 00:05:06.484 [2024-11-18 10:33:32.233804] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:06.484 [2024-11-18 10:33:32.233841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.484 [2024-11-18 10:33:32.309221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59041 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59041 /var/tmp/spdk2.sock 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59041 ']' 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.049 10:33:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.307 [2024-11-18 10:33:32.967136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
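locking_app_on_unlocked_coremask is the mirror image: the first target runs with --disable-cpumask-locks, so it leaves the core-0 lock file free, and the plain second instance on /var/tmp/spdk2.sock is the one expected to hold spdk_cpu_lock, which is what the locks_exist 59041 check below verifies. As a sketch:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # holds no core lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # free to claim /var/tmp/spdk_cpu_lock_000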
00:05:07.307 [2024-11-18 10:33:32.967267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:05:07.307 [2024-11-18 10:33:33.131716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.565 [2024-11-18 10:33:33.283425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.503 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.503 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.503 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59041 00:05:08.503 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.503 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59041 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59025 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59025 ']' 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59025 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59025 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.762 killing process with pid 59025 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59025' 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59025 00:05:08.762 10:33:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59025 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59041 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59041 ']' 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59041 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59041 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.291 killing process with pid 59041 00:05:11.291 10:33:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59041' 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59041 00:05:11.291 10:33:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59041 00:05:12.225 00:05:12.225 real 0m6.051s 00:05:12.225 user 0m6.312s 00:05:12.225 sys 0m0.799s 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.225 ************************************ 00:05:12.225 END TEST locking_app_on_unlocked_coremask 00:05:12.225 ************************************ 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.225 10:33:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:12.225 10:33:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.225 10:33:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.225 10:33:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.225 ************************************ 00:05:12.225 START TEST locking_app_on_locked_coremask 00:05:12.225 ************************************ 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59132 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59132 /var/tmp/spdk.sock 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59132 ']' 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.225 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.483 [2024-11-18 10:33:38.165952] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:12.483 [2024-11-18 10:33:38.166046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:05:12.483 [2024-11-18 10:33:38.312688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.741 [2024-11-18 10:33:38.388795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.308 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.308 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.308 10:33:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59143 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59143 /var/tmp/spdk2.sock 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59143 /var/tmp/spdk2.sock 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59143 /var/tmp/spdk2.sock 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59143 ']' 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.308 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.308 [2024-11-18 10:33:39.077577] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
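locking_app_on_locked_coremask expects the second instance to die at startup, so the harness wraps the wait in NOT, which inverts an exit status. A reduced stand-in (the real helper in autotest_common.sh also inspects the status for signal exits, as the es= bookkeeping in this trace shows; $pid2 is a placeholder):

    NOT() { if "$@"; then return 1; else return 0; fi; }
    # The second target must abort with "Cannot create lock on core 0,
    # probably process 59132 has claimed it", so it never starts listening:
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock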
00:05:13.308 [2024-11-18 10:33:39.077689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:05:13.565 [2024-11-18 10:33:39.239694] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59132 has claimed it. 00:05:13.565 [2024-11-18 10:33:39.239742] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59143) - No such process 00:05:13.823 ERROR: process (pid: 59143) is no longer running 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.823 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59132 00:05:14.097 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59132 00:05:14.097 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.370 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59132 00:05:14.370 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59132 ']' 00:05:14.370 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59132 00:05:14.370 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.370 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.370 10:33:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59132 00:05:14.370 10:33:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.370 10:33:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.370 killing process with pid 59132 00:05:14.370 10:33:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59132' 00:05:14.370 10:33:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59132 00:05:14.370 10:33:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59132 00:05:15.305 00:05:15.305 real 0m3.050s 00:05:15.305 user 0m3.302s 00:05:15.305 sys 0m0.540s 00:05:15.305 10:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.305 ************************************ 00:05:15.305 END 
TEST locking_app_on_locked_coremask 00:05:15.305 ************************************ 00:05:15.305 10:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.563 10:33:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:15.563 10:33:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.563 10:33:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.563 10:33:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.563 ************************************ 00:05:15.563 START TEST locking_overlapped_coremask 00:05:15.563 ************************************ 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59201 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59201 /var/tmp/spdk.sock 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59201 ']' 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.563 10:33:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.563 [2024-11-18 10:33:41.283396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:15.563 [2024-11-18 10:33:41.283512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:05:15.563 [2024-11-18 10:33:41.432979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.821 [2024-11-18 10:33:41.510095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.821 [2024-11-18 10:33:41.510625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.821 [2024-11-18 10:33:41.510647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59214 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59214 /var/tmp/spdk2.sock 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59214 /var/tmp/spdk2.sock 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59214 /var/tmp/spdk2.sock 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.388 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.388 [2024-11-18 10:33:42.140172] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
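locking_overlapped_coremask moves from a single core to overlapping masks: the first target took -m 0x7 (cores 0-2) and the second is attempted with -m 0x1c (cores 2-4). The contested core is just the bitwise intersection of the two masks, which is why the claim error below names core 2:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, bit 2 set, i.e. core 2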
00:05:16.388 [2024-11-18 10:33:42.140696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:05:16.646 [2024-11-18 10:33:42.319429] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59201 has claimed it. 00:05:16.646 [2024-11-18 10:33:42.319480] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59214) - No such process 00:05:16.904 ERROR: process (pid: 59214) is no longer running 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59201 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59201 ']' 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59201 00:05:16.904 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.162 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.162 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59201 00:05:17.162 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.162 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.162 killing process with pid 59201 00:05:17.162 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59201' 00:05:17.162 10:33:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59201 00:05:17.162 10:33:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59201 00:05:18.095 00:05:18.095 real 0m2.761s 00:05:18.095 user 0m7.547s 00:05:18.095 sys 0m0.392s 00:05:18.095 10:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.095 ************************************ 00:05:18.095 END TEST locking_overlapped_coremask 00:05:18.095 ************************************ 00:05:18.095 10:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.353 10:33:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:18.353 10:33:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.353 10:33:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.353 10:33:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.353 ************************************ 00:05:18.353 START TEST locking_overlapped_coremask_via_rpc 00:05:18.353 ************************************ 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59267 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59267 /var/tmp/spdk.sock 00:05:18.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59267 ']' 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.353 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.354 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:18.354 [2024-11-18 10:33:44.082026] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:18.354 [2024-11-18 10:33:44.082116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:05:18.354 [2024-11-18 10:33:44.230918] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
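locking_overlapped_coremask_via_rpc reuses the 0x7/0x1c overlap, but both targets start with --disable-cpumask-locks, so nothing is claimed at boot and the contention is deferred to the RPC that turns enforcement back on. Launch sketch from the repo root:

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &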
00:05:18.354 [2024-11-18 10:33:44.230949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.612 [2024-11-18 10:33:44.309271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.612 [2024-11-18 10:33:44.309461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.612 [2024-11-18 10:33:44.309530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59280 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59280 /var/tmp/spdk2.sock 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59280 ']' 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.177 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.178 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.178 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.178 10:33:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.178 [2024-11-18 10:33:44.993933] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:19.178 [2024-11-18 10:33:44.994050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:05:19.436 [2024-11-18 10:33:45.156600] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
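The second target's -m 0x1c expands to the three reactor cores reported just below (2, 3 and 4; the notices arrive in scheduling order, not sorted). Decoding a cpumask is a simple bit scan:

    mask=0x1c
    for (( core = 0; core < 64; core++ )); do
      (( (mask >> core) & 1 )) && echo "reactor core $core"    # prints 2, 3, 4
    done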
00:05:19.436 [2024-11-18 10:33:45.156636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.694 [2024-11-18 10:33:45.316592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.694 [2024-11-18 10:33:45.320372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.694 [2024-11-18 10:33:45.320390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.635 [2024-11-18 10:33:46.238305] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59267 has claimed it. 
00:05:20.635 request: 00:05:20.635 { 00:05:20.635 "method": "framework_enable_cpumask_locks", 00:05:20.635 "req_id": 1 00:05:20.635 } 00:05:20.635 Got JSON-RPC error response 00:05:20.635 response: 00:05:20.635 { 00:05:20.635 "code": -32603, 00:05:20.635 "message": "Failed to claim CPU core: 2" 00:05:20.635 } 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59267 /var/tmp/spdk.sock 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59267 ']' 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59280 /var/tmp/spdk2.sock 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59280 ']' 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
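The -32603 error above is the point of the test: the first spdk_tgt (-m 0x7, cores 0-2) was started with --disable-cpumask-locks and then re-enabled the locks over RPC, so when the second target (-m 0x1c, cores 2-4) asks for its own locks, the shared core 2 is already claimed and the call fails. A minimal sketch of the same flow outside the harness, using the sockets and core masks from this run:

    # target A: cores 0-2, no core lock files taken at startup
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks      # claims /var/tmp/spdk_cpu_lock_000..002

    # target B: cores 2-4, overlaps A on core 2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected JSON-RPC error: code -32603, "Failed to claim CPU core: 2"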
00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.635 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:20.896 00:05:20.896 real 0m2.645s 00:05:20.896 user 0m1.058s 00:05:20.896 sys 0m0.132s 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.896 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.896 ************************************ 00:05:20.896 END TEST locking_overlapped_coremask_via_rpc 00:05:20.896 ************************************ 00:05:20.896 10:33:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:20.896 10:33:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59267 ]] 00:05:20.896 10:33:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59267 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59267 ']' 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59267 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59267 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59267' 00:05:20.896 killing process with pid 59267 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59267 00:05:20.896 10:33:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59267 00:05:22.283 10:33:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59280 ]] 00:05:22.283 10:33:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59280 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59280 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.283 
10:33:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59280 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59280' 00:05:22.283 killing process with pid 59280 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59280 00:05:22.283 10:33:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59280 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59267 ]] 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59267 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59267 ']' 00:05:23.227 Process with pid 59267 is not found 00:05:23.227 Process with pid 59280 is not found 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59267 00:05:23.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59267) - No such process 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59267 is not found' 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59280 ]] 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59280 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59280 00:05:23.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59280) - No such process 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59280 is not found' 00:05:23.227 10:33:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:23.227 ************************************ 00:05:23.227 END TEST cpu_locks 00:05:23.227 ************************************ 00:05:23.227 00:05:23.227 real 0m28.075s 00:05:23.227 user 0m48.122s 00:05:23.227 sys 0m4.174s 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.227 10:33:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.227 00:05:23.227 real 0m54.288s 00:05:23.227 user 1m39.989s 00:05:23.227 sys 0m7.002s 00:05:23.227 10:33:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.492 ************************************ 00:05:23.492 END TEST event 00:05:23.492 10:33:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.492 ************************************ 00:05:23.492 10:33:49 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:23.492 10:33:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.492 10:33:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.492 10:33:49 -- common/autotest_common.sh@10 -- # set +x 00:05:23.492 ************************************ 00:05:23.492 START TEST thread 00:05:23.492 ************************************ 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:23.492 * Looking for test storage... 
00:05:23.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.492 10:33:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.492 10:33:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.492 10:33:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.492 10:33:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.492 10:33:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.492 10:33:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.492 10:33:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.492 10:33:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.492 10:33:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.492 10:33:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.492 10:33:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.492 10:33:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:23.492 10:33:49 thread -- scripts/common.sh@345 -- # : 1 00:05:23.492 10:33:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.492 10:33:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.492 10:33:49 thread -- scripts/common.sh@365 -- # decimal 1 00:05:23.492 10:33:49 thread -- scripts/common.sh@353 -- # local d=1 00:05:23.492 10:33:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.492 10:33:49 thread -- scripts/common.sh@355 -- # echo 1 00:05:23.492 10:33:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.492 10:33:49 thread -- scripts/common.sh@366 -- # decimal 2 00:05:23.492 10:33:49 thread -- scripts/common.sh@353 -- # local d=2 00:05:23.492 10:33:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.492 10:33:49 thread -- scripts/common.sh@355 -- # echo 2 00:05:23.492 10:33:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.492 10:33:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.492 10:33:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.492 10:33:49 thread -- scripts/common.sh@368 -- # return 0 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.492 --rc genhtml_branch_coverage=1 00:05:23.492 --rc genhtml_function_coverage=1 00:05:23.492 --rc genhtml_legend=1 00:05:23.492 --rc geninfo_all_blocks=1 00:05:23.492 --rc geninfo_unexecuted_blocks=1 00:05:23.492 00:05:23.492 ' 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.492 --rc genhtml_branch_coverage=1 00:05:23.492 --rc genhtml_function_coverage=1 00:05:23.492 --rc genhtml_legend=1 00:05:23.492 --rc geninfo_all_blocks=1 00:05:23.492 --rc geninfo_unexecuted_blocks=1 00:05:23.492 00:05:23.492 ' 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.492 --rc genhtml_branch_coverage=1 00:05:23.492 --rc genhtml_function_coverage=1 00:05:23.492 --rc genhtml_legend=1 00:05:23.492 --rc geninfo_all_blocks=1 00:05:23.492 --rc geninfo_unexecuted_blocks=1 00:05:23.492 00:05:23.492 ' 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.492 --rc genhtml_branch_coverage=1 00:05:23.492 --rc genhtml_function_coverage=1 00:05:23.492 --rc genhtml_legend=1 00:05:23.492 --rc geninfo_all_blocks=1 00:05:23.492 --rc geninfo_unexecuted_blocks=1 00:05:23.492 00:05:23.492 ' 00:05:23.492 10:33:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.492 10:33:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.492 ************************************ 00:05:23.492 START TEST thread_poller_perf 00:05:23.492 ************************************ 00:05:23.492 10:33:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:23.492 [2024-11-18 10:33:49.312977] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:23.492 [2024-11-18 10:33:49.313084] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59434 ] 00:05:23.758 [2024-11-18 10:33:49.469002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.758 Running 1000 pollers for 1 seconds with 1 microseconds period. 
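Judging by the banner above, the poller_perf flags map directly onto the printed summary: -b 1000 registers 1000 pollers, -l 1 sets a 1 microsecond poller period, and -t 1 runs the measurement for 1 second (flag meanings inferred from the banner, not from the tool's help text). The second run further below repeats this with -l 0, i.e. period-0 pollers that are dispatched on every reactor iteration:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 microsecond period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # period-0 pollers, pure dispatch cost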
00:05:23.758 [2024-11-18 10:33:49.547057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.143 [2024-11-18T10:33:51.027Z] ====================================== 00:05:25.143 [2024-11-18T10:33:51.027Z] busy:2609474000 (cyc) 00:05:25.143 [2024-11-18T10:33:51.027Z] total_run_count: 401000 00:05:25.143 [2024-11-18T10:33:51.027Z] tsc_hz: 2600000000 (cyc) 00:05:25.143 [2024-11-18T10:33:51.027Z] ====================================== 00:05:25.143 [2024-11-18T10:33:51.027Z] poller_cost: 6507 (cyc), 2502 (nsec) 00:05:25.143 00:05:25.143 real 0m1.389s 00:05:25.143 user 0m1.214s 00:05:25.143 sys 0m0.067s 00:05:25.143 10:33:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.143 10:33:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.143 ************************************ 00:05:25.143 END TEST thread_poller_perf 00:05:25.143 ************************************ 00:05:25.143 10:33:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.143 10:33:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:25.143 10:33:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.143 10:33:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.143 ************************************ 00:05:25.143 START TEST thread_poller_perf 00:05:25.143 ************************************ 00:05:25.143 10:33:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:25.143 [2024-11-18 10:33:50.759115] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:25.143 [2024-11-18 10:33:50.759215] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59470 ] 00:05:25.143 [2024-11-18 10:33:50.909217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.143 Running 1000 pollers for 1 seconds with 0 microseconds period. 
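The poller_cost line in each results block is simply the busy cycle count divided by total_run_count, converted to wall time with tsc_hz. For the 1 microsecond run above: 2609474000 cyc / 401000 runs ≈ 6507 cyc per poller execution, and 6507 cyc / 2.6 GHz ≈ 2502 ns, matching the reported 6507 (cyc), 2502 (nsec). The 0-period run whose results follow isolates the per-call framework overhead, which is why its cost comes out far lower.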
00:05:25.143 [2024-11-18 10:33:50.986110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.537 [2024-11-18T10:33:52.421Z] ====================================== 00:05:26.537 [2024-11-18T10:33:52.421Z] busy:2602325704 (cyc) 00:05:26.537 [2024-11-18T10:33:52.421Z] total_run_count: 5281000 00:05:26.537 [2024-11-18T10:33:52.421Z] tsc_hz: 2600000000 (cyc) 00:05:26.537 [2024-11-18T10:33:52.421Z] ====================================== 00:05:26.537 [2024-11-18T10:33:52.421Z] poller_cost: 492 (cyc), 189 (nsec) 00:05:26.537 00:05:26.537 real 0m1.377s 00:05:26.537 user 0m1.212s 00:05:26.537 sys 0m0.058s 00:05:26.537 10:33:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.537 ************************************ 00:05:26.537 END TEST thread_poller_perf 00:05:26.537 ************************************ 00:05:26.537 10:33:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.537 10:33:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:26.537 00:05:26.537 real 0m3.002s 00:05:26.537 user 0m2.526s 00:05:26.537 sys 0m0.238s 00:05:26.537 ************************************ 00:05:26.537 END TEST thread 00:05:26.537 10:33:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.537 10:33:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.537 ************************************ 00:05:26.537 10:33:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:26.537 10:33:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:26.537 10:33:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.537 10:33:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.537 10:33:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.537 ************************************ 00:05:26.537 START TEST app_cmdline 00:05:26.537 ************************************ 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:26.537 * Looking for test storage... 
00:05:26.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.537 10:33:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.537 --rc genhtml_branch_coverage=1 00:05:26.537 --rc genhtml_function_coverage=1 00:05:26.537 --rc genhtml_legend=1 00:05:26.537 --rc geninfo_all_blocks=1 00:05:26.537 --rc geninfo_unexecuted_blocks=1 00:05:26.537 00:05:26.537 ' 00:05:26.537 10:33:52 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.537 --rc genhtml_branch_coverage=1 00:05:26.537 --rc genhtml_function_coverage=1 00:05:26.537 --rc genhtml_legend=1 00:05:26.537 --rc geninfo_all_blocks=1 00:05:26.537 --rc geninfo_unexecuted_blocks=1 00:05:26.538 
00:05:26.538 ' 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.538 --rc genhtml_branch_coverage=1 00:05:26.538 --rc genhtml_function_coverage=1 00:05:26.538 --rc genhtml_legend=1 00:05:26.538 --rc geninfo_all_blocks=1 00:05:26.538 --rc geninfo_unexecuted_blocks=1 00:05:26.538 00:05:26.538 ' 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.538 --rc genhtml_branch_coverage=1 00:05:26.538 --rc genhtml_function_coverage=1 00:05:26.538 --rc genhtml_legend=1 00:05:26.538 --rc geninfo_all_blocks=1 00:05:26.538 --rc geninfo_unexecuted_blocks=1 00:05:26.538 00:05:26.538 ' 00:05:26.538 10:33:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:26.538 10:33:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59554 00:05:26.538 10:33:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59554 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59554 ']' 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.538 10:33:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.538 10:33:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.538 [2024-11-18 10:33:52.415309] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
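This spdk_tgt instance is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are served. The checks that follow verify that spdk_get_version reports the running build (git sha1 83e8405e4) and that any method outside the allowlist (env_dpdk_get_mem_stats is used as the probe) is rejected with JSON-RPC error -32601, "Method not found". A minimal reproduction, assuming the repo layout used in this run:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
    scripts/rpc.py env_dpdk_get_mem_stats    # expected failure: -32601, "Method not found"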
00:05:26.538 [2024-11-18 10:33:52.415435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59554 ] 00:05:26.797 [2024-11-18 10:33:52.573179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.797 [2024-11-18 10:33:52.652281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.362 10:33:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.362 10:33:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:27.362 10:33:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:27.621 { 00:05:27.621 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:05:27.621 "fields": { 00:05:27.621 "major": 25, 00:05:27.621 "minor": 1, 00:05:27.621 "patch": 0, 00:05:27.621 "suffix": "-pre", 00:05:27.621 "commit": "83e8405e4" 00:05:27.621 } 00:05:27.621 } 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:27.621 10:33:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:27.621 10:33:53 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:27.880 request: 00:05:27.880 { 00:05:27.880 "method": "env_dpdk_get_mem_stats", 00:05:27.880 "req_id": 1 00:05:27.880 } 00:05:27.880 Got JSON-RPC error response 00:05:27.880 response: 00:05:27.880 { 00:05:27.880 "code": -32601, 00:05:27.880 "message": "Method not found" 00:05:27.880 } 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.880 10:33:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59554 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59554 ']' 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59554 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59554 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.880 killing process with pid 59554 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59554' 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 59554 00:05:27.880 10:33:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 59554 00:05:29.256 00:05:29.256 real 0m2.872s 00:05:29.256 user 0m3.203s 00:05:29.256 sys 0m0.392s 00:05:29.256 10:33:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.256 10:33:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.256 ************************************ 00:05:29.256 END TEST app_cmdline 00:05:29.256 ************************************ 00:05:29.256 10:33:55 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:29.256 10:33:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.256 10:33:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.256 10:33:55 -- common/autotest_common.sh@10 -- # set +x 00:05:29.256 ************************************ 00:05:29.256 START TEST version 00:05:29.256 ************************************ 00:05:29.256 10:33:55 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:29.515 * Looking for test storage... 
00:05:29.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.515 10:33:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.515 10:33:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.515 10:33:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.515 10:33:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.515 10:33:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.515 10:33:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.515 10:33:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.515 10:33:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.515 10:33:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.515 10:33:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.515 10:33:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.515 10:33:55 version -- scripts/common.sh@344 -- # case "$op" in 00:05:29.515 10:33:55 version -- scripts/common.sh@345 -- # : 1 00:05:29.515 10:33:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.515 10:33:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.515 10:33:55 version -- scripts/common.sh@365 -- # decimal 1 00:05:29.515 10:33:55 version -- scripts/common.sh@353 -- # local d=1 00:05:29.515 10:33:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.515 10:33:55 version -- scripts/common.sh@355 -- # echo 1 00:05:29.515 10:33:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.515 10:33:55 version -- scripts/common.sh@366 -- # decimal 2 00:05:29.515 10:33:55 version -- scripts/common.sh@353 -- # local d=2 00:05:29.515 10:33:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.515 10:33:55 version -- scripts/common.sh@355 -- # echo 2 00:05:29.515 10:33:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.515 10:33:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.515 10:33:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.515 10:33:55 version -- scripts/common.sh@368 -- # return 0 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.515 --rc genhtml_branch_coverage=1 00:05:29.515 --rc genhtml_function_coverage=1 00:05:29.515 --rc genhtml_legend=1 00:05:29.515 --rc geninfo_all_blocks=1 00:05:29.515 --rc geninfo_unexecuted_blocks=1 00:05:29.515 00:05:29.515 ' 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.515 --rc genhtml_branch_coverage=1 00:05:29.515 --rc genhtml_function_coverage=1 00:05:29.515 --rc genhtml_legend=1 00:05:29.515 --rc geninfo_all_blocks=1 00:05:29.515 --rc geninfo_unexecuted_blocks=1 00:05:29.515 00:05:29.515 ' 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.515 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:29.515 --rc genhtml_branch_coverage=1 00:05:29.515 --rc genhtml_function_coverage=1 00:05:29.515 --rc genhtml_legend=1 00:05:29.515 --rc geninfo_all_blocks=1 00:05:29.515 --rc geninfo_unexecuted_blocks=1 00:05:29.515 00:05:29.515 ' 00:05:29.515 10:33:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.515 --rc genhtml_branch_coverage=1 00:05:29.515 --rc genhtml_function_coverage=1 00:05:29.515 --rc genhtml_legend=1 00:05:29.515 --rc geninfo_all_blocks=1 00:05:29.515 --rc geninfo_unexecuted_blocks=1 00:05:29.515 00:05:29.515 ' 00:05:29.515 10:33:55 version -- app/version.sh@17 -- # get_header_version major 00:05:29.515 10:33:55 version -- app/version.sh@14 -- # cut -f2 00:05:29.515 10:33:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.515 10:33:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.515 10:33:55 version -- app/version.sh@17 -- # major=25 00:05:29.515 10:33:55 version -- app/version.sh@18 -- # get_header_version minor 00:05:29.515 10:33:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.515 10:33:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.515 10:33:55 version -- app/version.sh@14 -- # cut -f2 00:05:29.515 10:33:55 version -- app/version.sh@18 -- # minor=1 00:05:29.516 10:33:55 version -- app/version.sh@19 -- # get_header_version patch 00:05:29.516 10:33:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.516 10:33:55 version -- app/version.sh@14 -- # cut -f2 00:05:29.516 10:33:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.516 10:33:55 version -- app/version.sh@19 -- # patch=0 00:05:29.516 10:33:55 version -- app/version.sh@20 -- # get_header_version suffix 00:05:29.516 10:33:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:29.516 10:33:55 version -- app/version.sh@14 -- # cut -f2 00:05:29.516 10:33:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:29.516 10:33:55 version -- app/version.sh@20 -- # suffix=-pre 00:05:29.516 10:33:55 version -- app/version.sh@22 -- # version=25.1 00:05:29.516 10:33:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:29.516 10:33:55 version -- app/version.sh@28 -- # version=25.1rc0 00:05:29.516 10:33:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:29.516 10:33:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:29.516 10:33:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:29.516 10:33:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:29.516 ************************************ 00:05:29.516 END TEST version 00:05:29.516 ************************************ 00:05:29.516 00:05:29.516 real 0m0.177s 00:05:29.516 user 0m0.122s 00:05:29.516 sys 0m0.084s 00:05:29.516 10:33:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.516 10:33:55 version -- common/autotest_common.sh@10 -- # set +x 00:05:29.516 10:33:55 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:29.516 10:33:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:29.516 10:33:55 -- spdk/autotest.sh@194 -- # uname -s 00:05:29.516 10:33:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:29.516 10:33:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:29.516 10:33:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:29.516 10:33:55 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:29.516 10:33:55 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:29.516 10:33:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:29.516 10:33:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.516 10:33:55 -- common/autotest_common.sh@10 -- # set +x 00:05:29.516 ************************************ 00:05:29.516 START TEST blockdev_nvme 00:05:29.516 ************************************ 00:05:29.516 10:33:55 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:29.516 * Looking for test storage... 00:05:29.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.774 10:33:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.774 --rc genhtml_branch_coverage=1 00:05:29.774 --rc genhtml_function_coverage=1 00:05:29.774 --rc genhtml_legend=1 00:05:29.774 --rc geninfo_all_blocks=1 00:05:29.774 --rc geninfo_unexecuted_blocks=1 00:05:29.774 00:05:29.774 ' 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.774 --rc genhtml_branch_coverage=1 00:05:29.774 --rc genhtml_function_coverage=1 00:05:29.774 --rc genhtml_legend=1 00:05:29.774 --rc geninfo_all_blocks=1 00:05:29.774 --rc geninfo_unexecuted_blocks=1 00:05:29.774 00:05:29.774 ' 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.774 --rc genhtml_branch_coverage=1 00:05:29.774 --rc genhtml_function_coverage=1 00:05:29.774 --rc genhtml_legend=1 00:05:29.774 --rc geninfo_all_blocks=1 00:05:29.774 --rc geninfo_unexecuted_blocks=1 00:05:29.774 00:05:29.774 ' 00:05:29.774 10:33:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.774 --rc genhtml_branch_coverage=1 00:05:29.774 --rc genhtml_function_coverage=1 00:05:29.774 --rc genhtml_legend=1 00:05:29.774 --rc geninfo_all_blocks=1 00:05:29.774 --rc geninfo_unexecuted_blocks=1 00:05:29.774 00:05:29.774 ' 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:29.774 10:33:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:29.774 10:33:55 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59726 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59726 00:05:29.775 10:33:55 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59726 ']' 00:05:29.775 10:33:55 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.775 10:33:55 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.775 10:33:55 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.775 10:33:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:29.775 10:33:55 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.775 10:33:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:29.775 [2024-11-18 10:33:55.566924] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
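Before the bdev checks, setup_nvme_conf (below) builds the configuration with scripts/gen_nvme.sh and feeds it to load_subsystem_config: one bdev_nvme_attach_controller entry per emulated PCIe controller, Nvme0 at 0000:00:10.0 through Nvme3 at 0000:00:13.0, as the JSON that follows shows. The same attachment can be done one controller at a time over RPC; a sketch, assuming a target already listening on the default socket:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_get_bdevs            # Nvme0n1 should appear with its block_size/num_blocks/uuid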
00:05:29.775 [2024-11-18 10:33:55.567241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:05:30.033 [2024-11-18 10:33:55.722044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.033 [2024-11-18 10:33:55.798787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.599 10:33:56 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.599 10:33:56 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:30.599 10:33:56 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:30.599 10:33:56 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:30.599 10:33:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:30.599 10:33:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:30.599 10:33:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.599 10:33:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:30.599 10:33:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.599 10:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.857 10:33:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.857 10:33:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:30.857 10:33:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.857 10:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.857 10:33:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.857 10:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:30.857 10:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:30.857 10:33:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.858 10:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.858 10:33:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.858 10:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:30.858 10:33:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.858 10:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.116 10:33:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.116 10:33:56 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:31.116 10:33:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:31.116 10:33:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:31.116 10:33:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.116 10:33:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:31.116 10:33:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:31.117 10:33:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "eedd4bd5-124a-424b-92d1-8a36a12f7251"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "eedd4bd5-124a-424b-92d1-8a36a12f7251",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "71dfcd83-7779-4354-ab13-839e9e08031b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "71dfcd83-7779-4354-ab13-839e9e08031b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f6b36ef9-ff23-4ab0-a183-fabb3a8744de"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f6b36ef9-ff23-4ab0-a183-fabb3a8744de",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ed186335-fd5e-4855-8eca-f3fba7f4a302"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed186335-fd5e-4855-8eca-f3fba7f4a302",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ee707e64-1754-4023-94ed-7ea4fb6ede62"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "ee707e64-1754-4023-94ed-7ea4fb6ede62",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2146c318-f741-45b6-956d-1a14d6e4c860"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2146c318-f741-45b6-956d-1a14d6e4c860",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:31.117 10:33:56 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:31.117 10:33:56 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:31.117 10:33:56 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:31.117 10:33:56 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59726 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59726 ']' 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59726 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:31.117 10:33:56 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59726 00:05:31.117 killing process with pid 59726 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59726' 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59726 00:05:31.117 10:33:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59726 00:05:32.495 10:33:58 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:32.495 10:33:58 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:32.495 10:33:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:32.495 10:33:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.495 10:33:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:32.495 ************************************ 00:05:32.495 START TEST bdev_hello_world 00:05:32.495 ************************************ 00:05:32.495 10:33:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:32.495 [2024-11-18 10:33:58.110764] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:32.495 [2024-11-18 10:33:58.110884] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59804 ] 00:05:32.495 [2024-11-18 10:33:58.265201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.495 [2024-11-18 10:33:58.344987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.067 [2024-11-18 10:33:58.833647] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:33.067 [2024-11-18 10:33:58.833686] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:33.067 [2024-11-18 10:33:58.833700] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:33.067 [2024-11-18 10:33:58.835593] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:33.067 [2024-11-18 10:33:58.835909] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:33.067 [2024-11-18 10:33:58.835927] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:33.067 [2024-11-18 10:33:58.836225] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:05:33.067 00:05:33.067 [2024-11-18 10:33:58.836245] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:33.637 00:05:33.637 real 0m1.329s 00:05:33.637 user 0m1.068s 00:05:33.637 sys 0m0.158s 00:05:33.637 10:33:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.637 ************************************ 00:05:33.637 END TEST bdev_hello_world 00:05:33.637 ************************************ 00:05:33.637 10:33:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:33.637 10:33:59 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:33.637 10:33:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:33.637 10:33:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.637 10:33:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:33.637 ************************************ 00:05:33.637 START TEST bdev_bounds 00:05:33.637 ************************************ 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59841 00:05:33.637 Process bdevio pid: 59841 00:05:33.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59841' 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59841 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59841 ']' 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.637 10:33:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:33.637 [2024-11-18 10:33:59.488411] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
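The bdev_hello_world run that finishes here boils down to a single standalone invocation of the example binary against the shared bdev config; a minimal sketch, assuming the repo layout and config path shown in the log above:

  # write "Hello World!" to the first NVMe bdev and read it back (sketch; paths copied from the log)
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1 ''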
00:05:33.637 [2024-11-18 10:33:59.488501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59841 ] 00:05:33.896 [2024-11-18 10:33:59.638773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.896 [2024-11-18 10:33:59.717710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.896 [2024-11-18 10:33:59.717881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.896 [2024-11-18 10:33:59.717936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.465 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.465 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:34.465 10:34:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:34.725 I/O targets: 00:05:34.725 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:34.725 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:34.725 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:34.725 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:34.725 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:34.725 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:34.725 00:05:34.725 00:05:34.725 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.725 http://cunit.sourceforge.net/ 00:05:34.725 00:05:34.725 00:05:34.725 Suite: bdevio tests on: Nvme3n1 00:05:34.725 Test: blockdev write read block ...passed 00:05:34.725 Test: blockdev write zeroes read block ...passed 00:05:34.725 Test: blockdev write zeroes read no split ...passed 00:05:34.725 Test: blockdev write zeroes read split ...passed 00:05:34.725 Test: blockdev write zeroes read split partial ...passed 00:05:34.725 Test: blockdev reset ...[2024-11-18 10:34:00.495036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:34.725 [2024-11-18 10:34:00.499412] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:05:34.725 Test: blockdev write read 8 blocks ...uccessful. 
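The bdev_bounds stage drives the same six bdevs through the bdevio server: the harness starts bdevio against bdev.json and then triggers the suites via tests.py over RPC. A minimal sketch of that two-step flow, with the flags copied verbatim from the invocations above (not interpreted here):

  # start the bdevio server against the shared bdev config (sketch)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  # then kick off the suites listed under "I/O targets" above
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests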
00:05:34.725 passed 00:05:34.725 Test: blockdev write read size > 128k ...passed 00:05:34.725 Test: blockdev write read invalid size ...passed 00:05:34.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.725 Test: blockdev write read max offset ...passed 00:05:34.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.725 Test: blockdev writev readv 8 blocks ...passed 00:05:34.725 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.725 Test: blockdev writev readv block ...passed 00:05:34.725 Test: blockdev writev readv size > 128k ...passed 00:05:34.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.725 Test: blockdev comparev and writev ...[2024-11-18 10:34:00.522236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:05:34.725 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b9a0a000 len:0x1000 00:05:34.725 [2024-11-18 10:34:00.522627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:34.725 passed 00:05:34.725 Test: blockdev nvme passthru vendor specific ...passed 00:05:34.725 Test: blockdev nvme admin passthru ...[2024-11-18 10:34:00.524953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:34.725 [2024-11-18 10:34:00.525050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:34.725 passed 00:05:34.725 Test: blockdev copy ...passed 00:05:34.725 Suite: bdevio tests on: Nvme2n3 00:05:34.725 Test: blockdev write read block ...passed 00:05:34.725 Test: blockdev write zeroes read block ...passed 00:05:34.726 Test: blockdev write zeroes read no split ...passed 00:05:34.726 Test: blockdev write zeroes read split ...passed 00:05:34.726 Test: blockdev write zeroes read split partial ...passed 00:05:34.726 Test: blockdev reset ...[2024-11-18 10:34:00.593030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:34.726 [2024-11-18 10:34:00.600034] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:05:34.726 Test: blockdev write read 8 blocks ...uccessful. 
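The MiB figures in the I/O targets listing above follow directly from num_blocks × block_size; for example:

  echo $(( 1548666 * 4096 / 1024 / 1024 ))   # Nvme0n1 -> 6049 with integer division (~6050 MiB)
  echo $(( 1310720 * 4096 / 1024 / 1024 ))   # Nvme1n1 -> 5120 MiB exactly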
00:05:34.726 passed 00:05:34.726 Test: blockdev write read size > 128k ...passed 00:05:34.726 Test: blockdev write read invalid size ...passed 00:05:34.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.726 Test: blockdev write read max offset ...passed 00:05:34.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.985 Test: blockdev writev readv 8 blocks ...passed 00:05:34.985 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.985 Test: blockdev writev readv block ...passed 00:05:34.985 Test: blockdev writev readv size > 128k ...passed 00:05:34.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.985 Test: blockdev comparev and writev ...[2024-11-18 10:34:00.617523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29cc06000 len:0x1000 00:05:34.985 [2024-11-18 10:34:00.617712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:34.985 passed 00:05:34.985 Test: blockdev nvme passthru rw ...passed 00:05:34.985 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:00.619738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:34.985 [2024-11-18 10:34:00.619778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:34.985 passed 00:05:34.985 Test: blockdev nvme admin passthru ...passed 00:05:34.985 Test: blockdev copy ...passed 00:05:34.985 Suite: bdevio tests on: Nvme2n2 00:05:34.985 Test: blockdev write read block ...passed 00:05:34.985 Test: blockdev write zeroes read block ...passed 00:05:34.985 Test: blockdev write zeroes read no split ...passed 00:05:34.985 Test: blockdev write zeroes read split ...passed 00:05:34.985 Test: blockdev write zeroes read split partial ...passed 00:05:34.985 Test: blockdev reset ...[2024-11-18 10:34:00.680156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:34.985 [2024-11-18 10:34:00.684095] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spasseduccessful. 
00:05:34.985 00:05:34.985 Test: blockdev write read 8 blocks ...passed 00:05:34.985 Test: blockdev write read size > 128k ...passed 00:05:34.985 Test: blockdev write read invalid size ...passed 00:05:34.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.985 Test: blockdev write read max offset ...passed 00:05:34.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.985 Test: blockdev writev readv 8 blocks ...passed 00:05:34.985 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.985 Test: blockdev writev readv block ...passed 00:05:34.985 Test: blockdev writev readv size > 128k ...passed 00:05:34.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.985 Test: blockdev comparev and writev ...[2024-11-18 10:34:00.695547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d523c000 len:0x1000 00:05:34.985 [2024-11-18 10:34:00.695916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:34.985 passed 00:05:34.985 Test: blockdev nvme passthru rw ...passed 00:05:34.985 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:00.697833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:34.985 [2024-11-18 10:34:00.697959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:34.985 passed 00:05:34.985 Test: blockdev nvme admin passthru ...passed 00:05:34.985 Test: blockdev copy ...passed 00:05:34.985 Suite: bdevio tests on: Nvme2n1 00:05:34.985 Test: blockdev write read block ...passed 00:05:34.985 Test: blockdev write zeroes read block ...passed 00:05:34.985 Test: blockdev write zeroes read no split ...passed 00:05:34.985 Test: blockdev write zeroes read split ...passed 00:05:34.985 Test: blockdev write zeroes read split partial ...passed 00:05:34.985 Test: blockdev reset ...[2024-11-18 10:34:00.753085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:34.985 [2024-11-18 10:34:00.756998] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:05:34.985 Test: blockdev write read 8 blocks ...uccessful. 
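Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of the same controller at 0000:00:12.0, which is why all three suites reset through that one PCI address. A hypothetical jq filter over the bdev_get_bdevs output (field names as printed earlier in this log) makes the grouping explicit:

  # group bdev names by the PCI address of their backing controller (illustrative only)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r 'group_by(.driver_specific.nvme[0].pci_address)[]
             | [.[0].driver_specific.nvme[0].pci_address, (map(.name) | join(" "))] | @tsv'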
00:05:34.985 passed 00:05:34.985 Test: blockdev write read size > 128k ...passed 00:05:34.985 Test: blockdev write read invalid size ...passed 00:05:34.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.985 Test: blockdev write read max offset ...passed 00:05:34.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.985 Test: blockdev writev readv 8 blocks ...passed 00:05:34.985 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.985 Test: blockdev writev readv block ...passed 00:05:34.985 Test: blockdev writev readv size > 128k ...passed 00:05:34.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.986 Test: blockdev comparev and writev ...[2024-11-18 10:34:00.777623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5238000 len:0x1000 00:05:34.986 [2024-11-18 10:34:00.777874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:34.986 passed 00:05:34.986 Test: blockdev nvme passthru rw ...passed 00:05:34.986 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:00.780516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:05:34.986 Test: blockdev nvme admin passthru ...RP2 0x0 00:05:34.986 [2024-11-18 10:34:00.780740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:34.986 passed 00:05:34.986 Test: blockdev copy ...passed 00:05:34.986 Suite: bdevio tests on: Nvme1n1 00:05:34.986 Test: blockdev write read block ...passed 00:05:34.986 Test: blockdev write zeroes read block ...passed 00:05:34.986 Test: blockdev write zeroes read no split ...passed 00:05:34.986 Test: blockdev write zeroes read split ...passed 00:05:34.986 Test: blockdev write zeroes read split partial ...passed 00:05:34.986 Test: blockdev reset ...[2024-11-18 10:34:00.839239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:34.986 [2024-11-18 10:34:00.843994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:34.986 passed 00:05:34.986 Test: blockdev write read 8 blocks ...passed 00:05:34.986 Test: blockdev write read size > 128k ...passed 00:05:34.986 Test: blockdev write read invalid size ...passed 00:05:34.986 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.986 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.986 Test: blockdev write read max offset ...passed 00:05:34.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.986 Test: blockdev writev readv 8 blocks ...passed 00:05:34.986 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.986 Test: blockdev writev readv block ...passed 00:05:34.986 Test: blockdev writev readv size > 128k ...passed 00:05:34.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.986 Test: blockdev comparev and writev ...[2024-11-18 10:34:00.859332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:05:34.986 Test: blockdev nvme passthru rw ...passed 00:05:34.986 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2d5234000 len:0x1000 00:05:34.986 [2024-11-18 10:34:00.859541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:34.986 [2024-11-18 10:34:00.860139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:34.986 [2024-11-18 10:34:00.860337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:34.986 passed 00:05:35.245 Test: blockdev nvme admin passthru ...passed 00:05:35.245 Test: blockdev copy ...passed 00:05:35.245 Suite: bdevio tests on: Nvme0n1 00:05:35.245 Test: blockdev write read block ...passed 00:05:35.245 Test: blockdev write zeroes read block ...passed 00:05:35.245 Test: blockdev write zeroes read no split ...passed 00:05:35.245 Test: blockdev write zeroes read split ...passed 00:05:35.245 Test: blockdev write zeroes read split partial ...passed 00:05:35.245 Test: blockdev reset ...[2024-11-18 10:34:00.919427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:35.245 [2024-11-18 10:34:00.922013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:35.245 passed 00:05:35.245 Test: blockdev write read 8 blocks ...passed 00:05:35.246 Test: blockdev write read size > 128k ...passed 00:05:35.246 Test: blockdev write read invalid size ...passed 00:05:35.246 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.246 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.246 Test: blockdev write read max offset ...passed 00:05:35.246 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.246 Test: blockdev writev readv 8 blocks ...passed 00:05:35.246 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.246 Test: blockdev writev readv block ...passed 00:05:35.246 Test: blockdev writev readv size > 128k ...passed 00:05:35.246 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.246 Test: blockdev comparev and writev ...[2024-11-18 10:34:00.928570] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:35.246 separate metadata which is not supported yet. 
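The comparev_and_writev case is skipped for Nvme0n1 because that bdev exposes separate (non-interleaved) metadata (md_size 64, md_interleave false in the bdev dump earlier in this log). A hypothetical one-liner to list such bdevs from the same dump:

  # list bdevs that carry separate metadata (illustrative only)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select((.md_size? // 0) > 0 and .md_interleave == false) | .name'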
00:05:35.246 passed 00:05:35.246 Test: blockdev nvme passthru rw ...passed 00:05:35.246 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:00.929136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:35.246 passed 00:05:35.246 Test: blockdev nvme admin passthru ...passed 00:05:35.246 Test: blockdev copy ...[2024-11-18 10:34:00.929274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:35.246 passed 00:05:35.246 00:05:35.246 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.246 suites 6 6 n/a 0 0 00:05:35.246 tests 138 138 138 0 0 00:05:35.246 asserts 893 893 893 0 n/a 00:05:35.246 00:05:35.246 Elapsed time = 1.246 seconds 00:05:35.246 0 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59841 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59841 ']' 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59841 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59841 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59841' 00:05:35.246 killing process with pid 59841 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59841 00:05:35.246 10:34:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59841 00:05:35.817 10:34:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:35.817 00:05:35.817 real 0m2.182s 00:05:35.817 user 0m5.591s 00:05:35.817 sys 0m0.285s 00:05:35.817 10:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.817 10:34:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:35.817 ************************************ 00:05:35.817 END TEST bdev_bounds 00:05:35.817 ************************************ 00:05:35.817 10:34:01 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:35.817 10:34:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:35.817 10:34:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.817 10:34:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:35.817 ************************************ 00:05:35.817 START TEST bdev_nbd 00:05:35.817 ************************************ 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:35.817 10:34:01 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59895 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59895 /var/tmp/spdk-nbd.sock 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59895 ']' 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:35.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.817 10:34:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:36.076 [2024-11-18 10:34:01.756381] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
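The bdev_nbd stage starting here exports each bdev as a kernel /dev/nbdX node through bdev_svc listening on /var/tmp/spdk-nbd.sock, then verifies each node with a direct-I/O dd. A minimal sketch of that flow, using only the paths and RPCs visible in this log:

  # start bdev_svc on the private NBD RPC socket with the shared bdev config (sketch)
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  # map a bdev to an NBD device, read one 4 KiB block, then tear it down
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0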
00:05:36.076 [2024-11-18 10:34:01.756709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:36.076 [2024-11-18 10:34:01.920889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.335 [2024-11-18 10:34:02.042307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:36.903 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.164 1+0 records in 
00:05:37.164 1+0 records out 00:05:37.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105627 s, 3.9 MB/s 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.164 10:34:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.426 1+0 records in 00:05:37.426 1+0 records out 00:05:37.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918904 s, 4.5 MB/s 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:37.426 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.747 1+0 records in 00:05:37.747 1+0 records out 00:05:37.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412351 s, 9.9 MB/s 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:37.747 1+0 records in 00:05:37.747 1+0 records out 00:05:37.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141385 s, 2.9 MB/s 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.747 10:34:03 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:37.747 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:38.008 1+0 records in 00:05:38.008 1+0 records out 00:05:38.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900579 s, 4.5 MB/s 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:38.008 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.269 10:34:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:38.269 1+0 records in 00:05:38.269 1+0 records out 00:05:38.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101079 s, 4.1 MB/s 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:38.269 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd0", 00:05:38.530 "bdev_name": "Nvme0n1" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd1", 00:05:38.530 "bdev_name": "Nvme1n1" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd2", 00:05:38.530 "bdev_name": "Nvme2n1" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd3", 00:05:38.530 "bdev_name": "Nvme2n2" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd4", 00:05:38.530 "bdev_name": "Nvme2n3" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd5", 00:05:38.530 "bdev_name": "Nvme3n1" 00:05:38.530 } 00:05:38.530 ]' 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd0", 00:05:38.530 "bdev_name": "Nvme0n1" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd1", 00:05:38.530 "bdev_name": "Nvme1n1" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd2", 00:05:38.530 "bdev_name": "Nvme2n1" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd3", 00:05:38.530 "bdev_name": "Nvme2n2" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd4", 00:05:38.530 "bdev_name": "Nvme2n3" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd5", 00:05:38.530 "bdev_name": "Nvme3n1" 00:05:38.530 } 00:05:38.530 ]' 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.530 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.793 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.055 10:34:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.315 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.575 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.836 10:34:05 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:39.836 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:40.098 /dev/nbd0 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.098 
10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.098 1+0 records in 00:05:40.098 1+0 records out 00:05:40.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571573 s, 7.2 MB/s 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.098 10:34:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:40.360 /dev/nbd1 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.360 1+0 records in 00:05:40.360 1+0 records out 00:05:40.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558159 s, 7.3 MB/s 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.360 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:40.360 /dev/nbd10 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.621 1+0 records in 00:05:40.621 1+0 records out 00:05:40.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322128 s, 12.7 MB/s 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:40.621 /dev/nbd11 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.621 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.622 10:34:06 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.622 1+0 records in 00:05:40.622 1+0 records out 00:05:40.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386485 s, 10.6 MB/s 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.622 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:40.883 /dev/nbd12 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:40.883 1+0 records in 00:05:40.883 1+0 records out 00:05:40.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825581 s, 5.0 MB/s 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:40.883 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:41.145 /dev/nbd13 
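Each nbd_start_disk above is followed by the same readiness check before the next device is attached (the check for /dev/nbd13 continues below). Condensed from the repeated xtrace, the pattern is roughly the following bash sketch; the temp-file path and the sleep between retries are illustrative assumptions, since the trace only shows the grep retries, the single-block read, and the size test:

# Poll until the kernel lists the NBD device in /proc/partitions, then
# prove it is readable: copy one 4 KiB block with O_DIRECT and check
# that the copy came out non-empty.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; not visible in the trace
    done
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]    # mirrors the traced '[' 4096 '!=' 0 ']' check
}
# usage: waitfornbd nbd0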
00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:41.145 1+0 records in 00:05:41.145 1+0 records out 00:05:41.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104049 s, 3.9 MB/s 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:41.145 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.146 10:34:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.407 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd0", 00:05:41.407 "bdev_name": "Nvme0n1" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd1", 00:05:41.407 "bdev_name": "Nvme1n1" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd10", 00:05:41.407 "bdev_name": "Nvme2n1" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd11", 00:05:41.407 "bdev_name": "Nvme2n2" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd12", 00:05:41.407 "bdev_name": "Nvme2n3" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd13", 00:05:41.407 "bdev_name": "Nvme3n1" 00:05:41.407 } 00:05:41.407 ]' 00:05:41.407 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.407 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd0", 00:05:41.407 "bdev_name": "Nvme0n1" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd1", 00:05:41.407 "bdev_name": "Nvme1n1" 00:05:41.407 
}, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd10", 00:05:41.407 "bdev_name": "Nvme2n1" 00:05:41.407 }, 00:05:41.407 { 00:05:41.407 "nbd_device": "/dev/nbd11", 00:05:41.407 "bdev_name": "Nvme2n2" 00:05:41.407 }, 00:05:41.407 { 00:05:41.408 "nbd_device": "/dev/nbd12", 00:05:41.408 "bdev_name": "Nvme2n3" 00:05:41.408 }, 00:05:41.408 { 00:05:41.408 "nbd_device": "/dev/nbd13", 00:05:41.408 "bdev_name": "Nvme3n1" 00:05:41.408 } 00:05:41.408 ]' 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.408 /dev/nbd1 00:05:41.408 /dev/nbd10 00:05:41.408 /dev/nbd11 00:05:41.408 /dev/nbd12 00:05:41.408 /dev/nbd13' 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.408 /dev/nbd1 00:05:41.408 /dev/nbd10 00:05:41.408 /dev/nbd11 00:05:41.408 /dev/nbd12 00:05:41.408 /dev/nbd13' 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:41.408 256+0 records in 00:05:41.408 256+0 records out 00:05:41.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773496 s, 136 MB/s 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.408 256+0 records in 00:05:41.408 256+0 records out 00:05:41.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102351 s, 10.2 MB/s 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.408 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.669 256+0 records in 00:05:41.669 256+0 records out 00:05:41.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175783 s, 6.0 MB/s 00:05:41.669 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.669 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:41.931 256+0 records in 00:05:41.931 256+0 records out 00:05:41.931 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.173828 s, 6.0 MB/s 00:05:41.932 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.932 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:42.193 256+0 records in 00:05:42.193 256+0 records out 00:05:42.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22836 s, 4.6 MB/s 00:05:42.193 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.193 10:34:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:42.193 256+0 records in 00:05:42.193 256+0 records out 00:05:42.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183009 s, 5.7 MB/s 00:05:42.193 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.193 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:42.454 256+0 records in 00:05:42.454 256+0 records out 00:05:42.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179115 s, 5.9 MB/s 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.454 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.713 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.972 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.233 10:34:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.233 10:34:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.493 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.752 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.009 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.009 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.009 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.009 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.009 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:44.010 10:34:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:44.267 malloc_lvol_verify 00:05:44.267 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:44.525 1745bad0-8efc-4cf6-b862-b3a764ee1c45 00:05:44.525 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:44.784 4cb63f42-9508-4275-8626-45241b38cf9f 00:05:44.784 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:44.784 /dev/nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:45.044 mke2fs 1.47.0 (5-Feb-2023) 00:05:45.044 Discarding device blocks: 0/4096 done 00:05:45.044 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:45.044 00:05:45.044 Allocating group tables: 0/1 done 00:05:45.044 Writing inode tables: 0/1 done 00:05:45.044 Creating journal (1024 blocks): done 00:05:45.044 Writing superblocks and filesystem accounting information: 0/1 done 00:05:45.044 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:45.044 10:34:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:45.044 10:34:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59895 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59895 ']' 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59895 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59895 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.045 killing process with pid 59895 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59895' 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59895 00:05:45.045 10:34:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59895 00:05:45.979 10:34:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:45.979 00:05:45.979 real 0m9.950s 00:05:45.979 user 0m13.636s 00:05:45.979 sys 0m3.203s 00:05:45.979 10:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.979 10:34:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:45.979 ************************************ 00:05:45.979 END TEST bdev_nbd 00:05:45.979 ************************************ 00:05:45.979 skipping fio tests on NVMe due to multi-ns failures. 00:05:45.979 10:34:11 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:45.979 10:34:11 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:45.979 10:34:11 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
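The nbd_with_lvol_verify step that closed out TEST bdev_nbd above is, stripped of its trace prefixes, a short RPC conversation with the spdk-nbd daemon. A hand-run equivalent would look roughly like this (socket path, names, and sizes are taken verbatim from the trace; the $rpc shorthand and the explicit capacity assertion are our condensation):

# malloc bdev -> lvstore -> lvol -> export over NBD -> mkfs as the sanity check
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # sizes as traced: 16 MB bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume, addressed by lvstore name
$rpc nbd_start_disk lvs/lvol /dev/nbd0
(( $(cat /sys/block/nbd0/size) != 0 ))                 # capacity visible (8192 sectors in the trace)
mkfs.ext4 /dev/nbd0                                    # filesystem creation must complete cleanly
$rpc nbd_stop_disk /dev/nbd0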
00:05:45.979 10:34:11 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:45.980 10:34:11 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:45.980 10:34:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:45.980 10:34:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.980 10:34:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:45.980 ************************************ 00:05:45.980 START TEST bdev_verify 00:05:45.980 ************************************ 00:05:45.980 10:34:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:45.980 [2024-11-18 10:34:11.746656] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:45.980 [2024-11-18 10:34:11.746743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:05:46.238 [2024-11-18 10:34:11.900015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.238 [2024-11-18 10:34:12.000152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.238 [2024-11-18 10:34:12.000237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.805 Running I/O for 5 seconds... 00:05:49.113 23680.00 IOPS, 92.50 MiB/s [2024-11-18T10:34:15.978Z] 20896.00 IOPS, 81.62 MiB/s [2024-11-18T10:34:16.920Z] 21077.33 IOPS, 82.33 MiB/s [2024-11-18T10:34:17.866Z] 20704.00 IOPS, 80.88 MiB/s [2024-11-18T10:34:17.866Z] 20211.20 IOPS, 78.95 MiB/s 00:05:51.982 Latency(us) 00:05:51.982 [2024-11-18T10:34:17.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:51.982 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x0 length 0xbd0bd 00:05:51.982 Nvme0n1 : 5.09 1633.63 6.38 0.00 0.00 78101.37 12351.02 88322.36 00:05:51.982 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:51.982 Nvme0n1 : 5.07 1690.11 6.60 0.00 0.00 75501.30 10637.00 94371.84 00:05:51.982 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x0 length 0xa0000 00:05:51.982 Nvme1n1 : 5.10 1632.61 6.38 0.00 0.00 78069.87 15022.87 79853.10 00:05:51.982 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0xa0000 length 0xa0000 00:05:51.982 Nvme1n1 : 5.08 1689.58 6.60 0.00 0.00 75194.10 12754.31 75820.11 00:05:51.982 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x0 length 0x80000 00:05:51.982 Nvme2n1 : 5.10 1631.83 6.37 0.00 0.00 77975.72 16938.54 72593.72 00:05:51.982 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x80000 length 0x80000 00:05:51.982 Nvme2n1 : 5.08 1689.03 6.60 0.00 0.00 74954.07 14619.57 67350.84 00:05:51.982 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x0 length 0x80000 00:05:51.982 Nvme2n2 : 5.10 1630.85 6.37 0.00 0.00 77677.31 16131.94 68157.44 00:05:51.982 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x80000 length 0x80000 00:05:51.982 Nvme2n2 : 5.08 1688.58 6.60 0.00 0.00 74849.37 13712.15 69367.34 00:05:51.982 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x0 length 0x80000 00:05:51.982 Nvme2n3 : 5.10 1630.14 6.37 0.00 0.00 77478.61 14216.27 69367.34 00:05:51.982 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x80000 length 0x80000 00:05:51.982 Nvme2n3 : 5.09 1697.11 6.63 0.00 0.00 74378.74 4990.82 69367.34 00:05:51.982 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x0 length 0x20000 00:05:51.982 Nvme3n1 : 5.11 1629.71 6.37 0.00 0.00 77341.30 12502.25 69367.34 00:05:51.982 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:51.982 Verification LBA range: start 0x20000 length 0x20000 00:05:51.982 Nvme3n1 : 5.10 1706.35 6.67 0.00 0.00 73903.44 7208.96 76223.41 00:05:51.982 [2024-11-18T10:34:17.866Z] =================================================================================================================== 00:05:51.982 [2024-11-18T10:34:17.866Z] Total : 19949.52 77.93 0.00 0.00 76258.72 4990.82 94371.84 00:05:53.365 00:05:53.365 real 0m7.209s 00:05:53.365 user 0m13.471s 00:05:53.365 sys 0m0.229s 00:05:53.365 10:34:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.365 ************************************ 00:05:53.365 END TEST bdev_verify 00:05:53.365 ************************************ 00:05:53.365 10:34:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:53.365 10:34:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:53.365 10:34:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:53.365 10:34:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.365 10:34:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.365 ************************************ 00:05:53.365 START TEST bdev_verify_big_io 00:05:53.365 ************************************ 00:05:53.365 10:34:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:53.365 [2024-11-18 10:34:19.043793] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
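TEST bdev_verify above and the big-I/O pass now starting drive the same bdevperf binary; only the I/O size changes (4096-byte I/Os above, 65536-byte I/Os below). The traced invocation can be replayed outside the harness as follows; the readings of -q, -o, -w, -t and -m are standard bdevperf usage, and -C is reproduced exactly as traced:

# Verify workload at queue depth 128 with 4 KiB I/Os for 5 seconds,
# reactors pinned to cores 0 and 1 (mask 0x3), bdevs taken from bdev.json.
cd /home/vagrant/spdk_repo/spdk
./build/examples/bdevperf \
    --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3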
00:05:53.365 [2024-11-18 10:34:19.043994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60372 ] 00:05:53.365 [2024-11-18 10:34:19.208393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.625 [2024-11-18 10:34:19.335239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.625 [2024-11-18 10:34:19.335259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.195 Running I/O for 5 seconds... 00:06:00.285 1456.00 IOPS, 91.00 MiB/s [2024-11-18T10:34:26.169Z] 2917.50 IOPS, 182.34 MiB/s 00:06:00.285 Latency(us) 00:06:00.285 [2024-11-18T10:34:26.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:00.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x0 length 0xbd0b 00:06:00.285 Nvme0n1 : 5.77 111.47 6.97 0.00 0.00 1078175.63 17442.66 1090519.04 00:06:00.285 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:00.285 Nvme0n1 : 5.85 124.44 7.78 0.00 0.00 924216.87 81062.99 890483.00 00:06:00.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x0 length 0xa000 00:06:00.285 Nvme1n1 : 5.77 114.90 7.18 0.00 0.00 1034060.91 115343.36 922746.88 00:06:00.285 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0xa000 length 0xa000 00:06:00.285 Nvme1n1 : 5.92 121.16 7.57 0.00 0.00 916668.44 64527.75 1755154.90 00:06:00.285 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x0 length 0x8000 00:06:00.285 Nvme2n1 : 5.82 121.06 7.57 0.00 0.00 969416.36 42547.99 922746.88 00:06:00.285 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x8000 length 0x8000 00:06:00.285 Nvme2n1 : 5.95 131.32 8.21 0.00 0.00 820103.19 11695.66 1793871.56 00:06:00.285 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x0 length 0x8000 00:06:00.285 Nvme2n2 : 5.91 126.17 7.89 0.00 0.00 900848.21 34482.02 935652.43 00:06:00.285 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x8000 length 0x8000 00:06:00.285 Nvme2n2 : 6.03 176.74 11.05 0.00 0.00 596091.97 437.96 1606741.07 00:06:00.285 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x0 length 0x8000 00:06:00.285 Nvme2n3 : 5.92 125.75 7.86 0.00 0.00 873768.61 34885.32 961463.53 00:06:00.285 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x8000 length 0x8000 00:06:00.285 Nvme2n3 : 5.76 122.15 7.63 0.00 0.00 1005325.46 18551.73 1142141.24 00:06:00.285 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: start 0x0 length 0x2000 00:06:00.285 Nvme3n1 : 5.93 140.40 8.77 0.00 0.00 768503.05 1537.58 1045349.61 00:06:00.285 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.285 Verification LBA range: 
start 0x2000 length 0x2000 00:06:00.285 Nvme3n1 : 5.92 117.49 7.34 0.00 0.00 1011628.64 81062.99 1677721.60 00:06:00.285 [2024-11-18T10:34:26.169Z] =================================================================================================================== 00:06:00.285 [2024-11-18T10:34:26.169Z] Total : 1533.05 95.82 0.00 0.00 891053.01 437.96 1793871.56 00:06:01.660 00:06:01.660 real 0m8.532s 00:06:01.660 user 0m16.008s 00:06:01.660 sys 0m0.306s 00:06:01.660 10:34:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.660 ************************************ 00:06:01.660 END TEST bdev_verify_big_io 00:06:01.660 ************************************ 00:06:01.660 10:34:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:01.918 10:34:27 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:01.918 10:34:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:01.918 10:34:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.918 10:34:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.918 ************************************ 00:06:01.918 START TEST bdev_write_zeroes 00:06:01.918 ************************************ 00:06:01.918 10:34:27 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:01.918 [2024-11-18 10:34:27.650349] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:01.918 [2024-11-18 10:34:27.650510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60481 ] 00:06:02.176 [2024-11-18 10:34:27.822562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.176 [2024-11-18 10:34:27.920791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.742 Running I/O for 1 seconds... 
00:06:03.674 58752.00 IOPS, 229.50 MiB/s 00:06:03.674 Latency(us) 00:06:03.674 [2024-11-18T10:34:29.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:03.675 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.675 Nvme0n1 : 1.02 9789.62 38.24 0.00 0.00 13048.11 5545.35 26214.40 00:06:03.675 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.675 Nvme1n1 : 1.02 9778.25 38.20 0.00 0.00 13047.02 8872.57 21374.82 00:06:03.675 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.675 Nvme2n1 : 1.02 9767.16 38.15 0.00 0.00 12994.41 7309.78 19963.27 00:06:03.675 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.675 Nvme2n2 : 1.02 9756.06 38.11 0.00 0.00 12992.08 7259.37 20265.75 00:06:03.675 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.675 Nvme2n3 : 1.02 9745.02 38.07 0.00 0.00 12972.27 5973.86 20568.22 00:06:03.675 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:03.675 Nvme3n1 : 1.03 9733.90 38.02 0.00 0.00 12966.66 5520.15 21878.94 00:06:03.675 [2024-11-18T10:34:29.559Z] =================================================================================================================== 00:06:03.675 [2024-11-18T10:34:29.559Z] Total : 58570.00 228.79 0.00 0.00 13003.42 5520.15 26214.40 00:06:04.609 00:06:04.609 real 0m2.677s 00:06:04.609 user 0m2.349s 00:06:04.609 sys 0m0.212s 00:06:04.609 10:34:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.609 ************************************ 00:06:04.609 END TEST bdev_write_zeroes 00:06:04.609 ************************************ 00:06:04.609 10:34:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:04.609 10:34:30 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:04.609 10:34:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:04.609 10:34:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.609 10:34:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.609 ************************************ 00:06:04.609 START TEST bdev_json_nonenclosed 00:06:04.609 ************************************ 00:06:04.609 10:34:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:04.609 [2024-11-18 10:34:30.361874] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:04.609 [2024-11-18 10:34:30.361984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:06:04.869 [2024-11-18 10:34:30.522695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.869 [2024-11-18 10:34:30.618292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.869 [2024-11-18 10:34:30.618367] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:04.869 [2024-11-18 10:34:30.618383] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:04.869 [2024-11-18 10:34:30.618392] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.130 00:06:05.130 real 0m0.493s 00:06:05.130 user 0m0.294s 00:06:05.130 sys 0m0.095s 00:06:05.130 ************************************ 00:06:05.130 10:34:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.130 10:34:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:05.130 END TEST bdev_json_nonenclosed 00:06:05.130 ************************************ 00:06:05.130 10:34:30 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:05.130 10:34:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:05.130 10:34:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.130 10:34:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.130 ************************************ 00:06:05.130 START TEST bdev_json_nonarray 00:06:05.130 ************************************ 00:06:05.130 10:34:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:05.130 [2024-11-18 10:34:30.915324] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:05.130 [2024-11-18 10:34:30.915450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60554 ] 00:06:05.391 [2024-11-18 10:34:31.077963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.391 [2024-11-18 10:34:31.196779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.391 [2024-11-18 10:34:31.196880] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
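The two JSON negative tests here fail exactly as their error lines report: nonenclosed.json is not wrapped in a top-level {} object, and nonarray.json (whose remaining teardown messages continue below) supplies a "subsystems" key that is not an array. A config that passes both structural checks has the following skeleton; this is a minimal illustrative shape inferred from those two errors, not the actual contents of either test file:

# hypothetical minimal well-formed input for bdevperf --json
cat > /tmp/wellformed.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF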
00:06:05.391 [2024-11-18 10:34:31.196901] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:05.391 [2024-11-18 10:34:31.196912] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.652 00:06:05.652 real 0m0.537s 00:06:05.652 user 0m0.320s 00:06:05.652 sys 0m0.112s 00:06:05.652 10:34:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.652 ************************************ 00:06:05.652 END TEST bdev_json_nonarray 00:06:05.652 ************************************ 00:06:05.652 10:34:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:05.652 10:34:31 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:05.652 00:06:05.652 real 0m36.115s 00:06:05.652 user 0m55.558s 00:06:05.652 sys 0m5.320s 00:06:05.652 10:34:31 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.652 10:34:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 ************************************ 00:06:05.652 END TEST blockdev_nvme 00:06:05.652 ************************************ 00:06:05.652 10:34:31 -- spdk/autotest.sh@209 -- # uname -s 00:06:05.652 10:34:31 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:05.652 10:34:31 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:05.652 10:34:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.653 10:34:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.653 10:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:05.653 ************************************ 00:06:05.653 START TEST blockdev_nvme_gpt 00:06:05.653 ************************************ 00:06:05.653 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:05.914 * Looking for test storage... 
00:06:05.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.914 10:34:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.914 --rc genhtml_branch_coverage=1 00:06:05.914 --rc genhtml_function_coverage=1 00:06:05.914 --rc genhtml_legend=1 00:06:05.914 --rc geninfo_all_blocks=1 00:06:05.914 --rc geninfo_unexecuted_blocks=1 00:06:05.914 00:06:05.914 ' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.914 --rc 
genhtml_branch_coverage=1 00:06:05.914 --rc genhtml_function_coverage=1 00:06:05.914 --rc genhtml_legend=1 00:06:05.914 --rc geninfo_all_blocks=1 00:06:05.914 --rc geninfo_unexecuted_blocks=1 00:06:05.914 00:06:05.914 ' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.914 --rc genhtml_branch_coverage=1 00:06:05.914 --rc genhtml_function_coverage=1 00:06:05.914 --rc genhtml_legend=1 00:06:05.914 --rc geninfo_all_blocks=1 00:06:05.914 --rc geninfo_unexecuted_blocks=1 00:06:05.914 00:06:05.914 ' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.914 --rc genhtml_branch_coverage=1 00:06:05.914 --rc genhtml_function_coverage=1 00:06:05.914 --rc genhtml_legend=1 00:06:05.914 --rc geninfo_all_blocks=1 00:06:05.914 --rc geninfo_unexecuted_blocks=1 00:06:05.914 00:06:05.914 ' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60638 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60638 
00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60638 ']' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.914 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.914 10:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:05.915 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.915 10:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.915 [2024-11-18 10:34:31.769227] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:05.915 [2024-11-18 10:34:31.769366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60638 ] 00:06:06.175 [2024-11-18 10:34:31.932090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.175 [2024-11-18 10:34:32.054357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.115 10:34:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.115 10:34:32 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:07.115 10:34:32 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:07.115 10:34:32 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:07.115 10:34:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:07.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:07.374 Waiting for block devices as requested 00:06:07.374 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:07.634 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:07.634 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:07.634 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:12.922 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:12.922 10:34:38 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:12.922 BYT; 00:06:12.922 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:12.922 BYT; 00:06:12.922 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:12.922 10:34:38 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:12.922 10:34:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:13.941 The operation has completed successfully. 00:06:13.941 10:34:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:14.878 The operation has completed successfully. 00:06:14.878 10:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:15.704 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.704 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.704 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.964 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.964 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:15.964 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.964 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:15.964 [] 00:06:15.964 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.964 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:15.964 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:15.964 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:15.964 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:15.964 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:15.964 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.964 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.224 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:16.224 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.224 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.224 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:16.224 10:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:16.224 10:34:41 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.224 10:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:16.224 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:16.485 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "85eb046f-f322-465e-be92-50c9d88acfd6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "85eb046f-f322-465e-be92-50c9d88acfd6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "223831c2-b6f5-4cd2-b6ad-d62d6d5eb912"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "223831c2-b6f5-4cd2-b6ad-d62d6d5eb912",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bf80fa67-83c3-4c93-9789-d8f7c18d5d87"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf80fa67-83c3-4c93-9789-d8f7c18d5d87",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9bb584a9-880a-473b-87fa-5a3a8805df14"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bb584a9-880a-473b-87fa-5a3a8805df14",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2b6aae3b-7071-4182-8b26-f39891d1798d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2b6aae3b-7071-4182-8b26-f39891d1798d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:16.485 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:16.485 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:16.485 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:16.485 10:34:42 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60638 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60638 ']' 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60638 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60638 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.485 killing process with pid 60638 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60638' 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60638 00:06:16.485 10:34:42 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60638 00:06:17.862 10:34:43 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:17.862 10:34:43 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:17.862 10:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:17.862 10:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.862 10:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:17.862 ************************************ 00:06:17.862 START TEST bdev_hello_world 00:06:17.862 ************************************ 00:06:17.862 10:34:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:17.862 
[2024-11-18 10:34:43.387511] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:17.862 [2024-11-18 10:34:43.387632] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:06:17.862 [2024-11-18 10:34:43.541886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.862 [2024-11-18 10:34:43.617833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.428 [2024-11-18 10:34:44.105517] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:18.428 [2024-11-18 10:34:44.105553] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:18.428 [2024-11-18 10:34:44.105570] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:18.428 [2024-11-18 10:34:44.107477] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:18.428 [2024-11-18 10:34:44.108026] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:18.428 [2024-11-18 10:34:44.108053] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:18.429 [2024-11-18 10:34:44.108214] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:18.429 00:06:18.429 [2024-11-18 10:34:44.108234] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:18.992 00:06:18.992 real 0m1.332s 00:06:18.992 user 0m1.080s 00:06:18.992 sys 0m0.149s 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.992 ************************************ 00:06:18.992 END TEST bdev_hello_world 00:06:18.992 ************************************ 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:18.992 10:34:44 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:18.992 10:34:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.992 10:34:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.992 10:34:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:18.992 ************************************ 00:06:18.992 START TEST bdev_bounds 00:06:18.992 ************************************ 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61300 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:18.992 Process bdevio pid: 61300 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61300' 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61300 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61300 ']' 00:06:18.992 10:34:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.993 10:34:44 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.993 10:34:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.993 10:34:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.993 10:34:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:18.993 [2024-11-18 10:34:44.770818] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:18.993 [2024-11-18 10:34:44.770914] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61300 ] 00:06:19.250 [2024-11-18 10:34:44.915316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.250 [2024-11-18 10:34:44.993762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.250 [2024-11-18 10:34:44.994053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.250 [2024-11-18 10:34:44.994056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.815 10:34:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.815 10:34:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:19.815 10:34:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:20.074 I/O targets: 00:06:20.074 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:20.074 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:20.074 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:20.074 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:20.074 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:20.074 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:20.074 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:20.074 00:06:20.074 00:06:20.074 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.074 http://cunit.sourceforge.net/ 00:06:20.074 00:06:20.074 00:06:20.074 Suite: bdevio tests on: Nvme3n1 00:06:20.074 Test: blockdev write read block ...passed 00:06:20.074 Test: blockdev write zeroes read block ...passed 00:06:20.074 Test: blockdev write zeroes read no split ...passed 00:06:20.074 Test: blockdev write zeroes read split ...passed 00:06:20.074 Test: blockdev write zeroes read split partial ...passed 00:06:20.074 Test: blockdev reset ...[2024-11-18 10:34:45.754895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:20.074 [2024-11-18 10:34:45.757620] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:20.074 passed 00:06:20.074 Test: blockdev write read 8 blocks ...passed 00:06:20.074 Test: blockdev write read size > 128k ...passed 00:06:20.074 Test: blockdev write read invalid size ...passed 00:06:20.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.074 Test: blockdev write read max offset ...passed 00:06:20.074 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.074 Test: blockdev writev readv 8 blocks ...passed 00:06:20.074 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.074 Test: blockdev writev readv block ...passed 00:06:20.074 Test: blockdev writev readv size > 128k ...passed 00:06:20.074 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.074 Test: blockdev comparev and writev ...[2024-11-18 10:34:45.766850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a04000 len:0x1000 00:06:20.074 [2024-11-18 10:34:45.766976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.074 passed 00:06:20.074 Test: blockdev nvme passthru rw ...passed 00:06:20.074 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:45.767612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.074 [2024-11-18 10:34:45.767687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.074 passed 00:06:20.074 Test: blockdev nvme admin passthru ...passed 00:06:20.074 Test: blockdev copy ...passed 00:06:20.074 Suite: bdevio tests on: Nvme2n3 00:06:20.074 Test: blockdev write read block ...passed 00:06:20.074 Test: blockdev write zeroes read block ...passed 00:06:20.074 Test: blockdev write zeroes read no split ...passed 00:06:20.074 Test: blockdev write zeroes read split ...passed 00:06:20.074 Test: blockdev write zeroes read split partial ...passed 00:06:20.074 Test: blockdev reset ...[2024-11-18 10:34:45.821810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:20.074 [2024-11-18 10:34:45.824617] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:20.074 passed 00:06:20.074 Test: blockdev write read 8 blocks ...passed 00:06:20.074 Test: blockdev write read size > 128k ...passed 00:06:20.074 Test: blockdev write read invalid size ...passed 00:06:20.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.074 Test: blockdev write read max offset ...passed 00:06:20.074 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.074 Test: blockdev writev readv 8 blocks ...passed 00:06:20.074 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.074 Test: blockdev writev readv block ...passed 00:06:20.074 Test: blockdev writev readv size > 128k ...passed 00:06:20.074 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.074 Test: blockdev comparev and writev ...[2024-11-18 10:34:45.831405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a02000 len:0x1000 00:06:20.074 [2024-11-18 10:34:45.831507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.074 passed 00:06:20.074 Test: blockdev nvme passthru rw ...passed 00:06:20.074 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:45.832246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.075 [2024-11-18 10:34:45.832324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.075 passed 00:06:20.075 Test: blockdev nvme admin passthru ...passed 00:06:20.075 Test: blockdev copy ...passed 00:06:20.075 Suite: bdevio tests on: Nvme2n2 00:06:20.075 Test: blockdev write read block ...passed 00:06:20.075 Test: blockdev write zeroes read block ...passed 00:06:20.075 Test: blockdev write zeroes read no split ...passed 00:06:20.075 Test: blockdev write zeroes read split ...passed 00:06:20.075 Test: blockdev write zeroes read split partial ...passed 00:06:20.075 Test: blockdev reset ...[2024-11-18 10:34:45.887388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:20.075 [2024-11-18 10:34:45.890078] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:20.075 passed 00:06:20.075 Test: blockdev write read 8 blocks ...passed 00:06:20.075 Test: blockdev write read size > 128k ...passed 00:06:20.075 Test: blockdev write read invalid size ...passed 00:06:20.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.075 Test: blockdev write read max offset ...passed 00:06:20.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.075 Test: blockdev writev readv 8 blocks ...passed 00:06:20.075 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.075 Test: blockdev writev readv block ...passed 00:06:20.075 Test: blockdev writev readv size > 128k ...passed 00:06:20.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.075 Test: blockdev comparev and writev ...[2024-11-18 10:34:45.896791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dce38000 len:0x1000 00:06:20.075 passed 00:06:20.075 Test: blockdev nvme passthru rw ...passed 00:06:20.075 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:45.896966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.075 [2024-11-18 10:34:45.897583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.075 passed 00:06:20.075 Test: blockdev nvme admin passthru ...[2024-11-18 10:34:45.897722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.075 passed 00:06:20.075 Test: blockdev copy ...passed 00:06:20.075 Suite: bdevio tests on: Nvme2n1 00:06:20.075 Test: blockdev write read block ...passed 00:06:20.075 Test: blockdev write zeroes read block ...passed 00:06:20.075 Test: blockdev write zeroes read no split ...passed 00:06:20.075 Test: blockdev write zeroes read split ...passed 00:06:20.334 Test: blockdev write zeroes read split partial ...passed 00:06:20.334 Test: blockdev reset ...[2024-11-18 10:34:45.961079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:20.334 [2024-11-18 10:34:45.963742] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:20.334 passed 00:06:20.334 Test: blockdev write read 8 blocks ...passed 00:06:20.334 Test: blockdev write read size > 128k ...passed 00:06:20.334 Test: blockdev write read invalid size ...passed 00:06:20.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.334 Test: blockdev write read max offset ...passed 00:06:20.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.334 Test: blockdev writev readv 8 blocks ...passed 00:06:20.334 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.334 Test: blockdev writev readv block ...passed 00:06:20.334 Test: blockdev writev readv size > 128k ...passed 00:06:20.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.334 Test: blockdev comparev and writev ...[2024-11-18 10:34:45.970918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dce34000 len:0x1000 00:06:20.334 [2024-11-18 10:34:45.971107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.334 passed 00:06:20.334 Test: blockdev nvme passthru rw ...passed 00:06:20.334 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:45.972116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:20.334 [2024-11-18 10:34:45.972317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:20.334 passed 00:06:20.334 Test: blockdev nvme admin passthru ...passed 00:06:20.334 Test: blockdev copy ...passed 00:06:20.334 Suite: bdevio tests on: Nvme1n1p2 00:06:20.334 Test: blockdev write read block ...passed 00:06:20.334 Test: blockdev write zeroes read block ...passed 00:06:20.334 Test: blockdev write zeroes read no split ...passed 00:06:20.334 Test: blockdev write zeroes read split ...passed 00:06:20.334 Test: blockdev write zeroes read split partial ...passed 00:06:20.334 Test: blockdev reset ...[2024-11-18 10:34:46.027719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:20.334 [2024-11-18 10:34:46.030175] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:20.334 passed 00:06:20.334 Test: blockdev write read 8 blocks ...passed 00:06:20.334 Test: blockdev write read size > 128k ...passed 00:06:20.334 Test: blockdev write read invalid size ...passed 00:06:20.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.334 Test: blockdev write read max offset ...passed 00:06:20.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.334 Test: blockdev writev readv 8 blocks ...passed 00:06:20.334 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.334 Test: blockdev writev readv block ...passed 00:06:20.334 Test: blockdev writev readv size > 128k ...passed 00:06:20.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.334 Test: blockdev comparev and writev ...[2024-11-18 10:34:46.039364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 lpassed 00:06:20.334 Test: blockdev nvme passthru rw ...passed 00:06:20.334 Test: blockdev nvme passthru vendor specific ...passed 00:06:20.334 Test: blockdev nvme admin passthru ...passed 00:06:20.334 Test: blockdev copy ...en:1 SGL DATA BLOCK ADDRESS 0x2dce30000 len:0x1000 00:06:20.334 [2024-11-18 10:34:46.039536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.334 passed 00:06:20.334 Suite: bdevio tests on: Nvme1n1p1 00:06:20.334 Test: blockdev write read block ...passed 00:06:20.334 Test: blockdev write zeroes read block ...passed 00:06:20.334 Test: blockdev write zeroes read no split ...passed 00:06:20.334 Test: blockdev write zeroes read split ...passed 00:06:20.334 Test: blockdev write zeroes read split partial ...passed 00:06:20.334 Test: blockdev reset ...[2024-11-18 10:34:46.085322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:20.334 [2024-11-18 10:34:46.087683] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:20.334 passed 00:06:20.334 Test: blockdev write read 8 blocks ...passed 00:06:20.334 Test: blockdev write read size > 128k ...passed 00:06:20.334 Test: blockdev write read invalid size ...passed 00:06:20.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.334 Test: blockdev write read max offset ...passed 00:06:20.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.334 Test: blockdev writev readv 8 blocks ...passed 00:06:20.334 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.334 Test: blockdev writev readv block ...passed 00:06:20.334 Test: blockdev writev readv size > 128k ...passed 00:06:20.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.334 Test: blockdev comparev and writev ...[2024-11-18 10:34:46.094378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b7c0e000 len:0x1000 00:06:20.334 [2024-11-18 10:34:46.094555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:20.334 passed 00:06:20.334 Test: blockdev nvme passthru rw ...passed 00:06:20.334 Test: blockdev nvme passthru vendor specific ...passed 00:06:20.334 Test: blockdev nvme admin passthru ...passed 00:06:20.334 Test: blockdev copy ...passed 00:06:20.334 Suite: bdevio tests on: Nvme0n1 00:06:20.334 Test: blockdev write read block ...passed 00:06:20.334 Test: blockdev write zeroes read block ...passed 00:06:20.334 Test: blockdev write zeroes read no split ...passed 00:06:20.334 Test: blockdev write zeroes read split ...passed 00:06:20.334 Test: blockdev write zeroes read split partial ...passed 00:06:20.334 Test: blockdev reset ...[2024-11-18 10:34:46.136308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:20.334 [2024-11-18 10:34:46.138539] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:20.334 passed 00:06:20.334 Test: blockdev write read 8 blocks ...passed 00:06:20.334 Test: blockdev write read size > 128k ...passed 00:06:20.334 Test: blockdev write read invalid size ...passed 00:06:20.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:20.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:20.334 Test: blockdev write read max offset ...passed 00:06:20.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:20.334 Test: blockdev writev readv 8 blocks ...passed 00:06:20.334 Test: blockdev writev readv 30 x 1block ...passed 00:06:20.334 Test: blockdev writev readv block ...passed 00:06:20.334 Test: blockdev writev readv size > 128k ...passed 00:06:20.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:20.334 Test: blockdev comparev and writev ...passed 00:06:20.334 Test: blockdev nvme passthru rw ...[2024-11-18 10:34:46.144956] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
separate metadata which is not supported yet. 00:06:20.335 passed 00:06:20.335 Test: blockdev nvme passthru vendor specific ...[2024-11-18 10:34:46.145393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:20.335 [2024-11-18 10:34:46.145556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:20.335 passed 00:06:20.335 Test: blockdev nvme admin passthru ...passed 00:06:20.335 Test: blockdev copy ...passed 00:06:20.335 00:06:20.335 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.335 suites 7 7 n/a 0 0 00:06:20.335 tests 161 161 161 0 0 00:06:20.335 asserts 1025 1025 1025 0 n/a 00:06:20.335 00:06:20.335 Elapsed time = 1.156 seconds 00:06:20.335 0 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61300 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61300 ']' 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61300 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61300 00:06:20.335 killing process with pid 61300 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61300' 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61300 00:06:20.335 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61300 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:20.901 00:06:20.901 real 0m1.981s 00:06:20.901 user 0m5.174s 00:06:20.901 sys 0m0.238s 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:20.901 ************************************ 00:06:20.901 END TEST bdev_bounds 00:06:20.901 ************************************ 00:06:20.901 10:34:46 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:20.901 10:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:20.901 10:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.901 10:34:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.901 ************************************ 00:06:20.901 START TEST bdev_nbd 00:06:20.901 ************************************ 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[
Linux == Linux ]] 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61354 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:20.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61354 /var/tmp/spdk-nbd.sock 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61354 ']' 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.901 10:34:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 [2024-11-18 10:34:46.822799] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:21.160 [2024-11-18 10:34:46.823049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.160 [2024-11-18 10:34:46.978326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.466 [2024-11-18 10:34:47.072793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.060 1+0 records in 00:06:22.060 1+0 records out 00:06:22.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485852 s, 8.4 MB/s 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.060 10:34:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.319 1+0 records in 00:06:22.319 1+0 records out 00:06:22.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120979 s, 3.4 MB/s 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.319 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.579 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.579 1+0 records in 00:06:22.579 1+0 records out 00:06:22.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325284 s, 12.6 MB/s 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.580 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.839 10:34:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.839 1+0 records in 00:06:22.839 1+0 records out 00:06:22.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921291 s, 4.4 MB/s 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:22.840 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.099 1+0 records in 00:06:23.099 1+0 records out 00:06:23.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106245 s, 3.9 MB/s 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.099 10:34:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
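Every attach traced in this stretch is the same pattern: an nbd_start_disk RPC over the dedicated /var/tmp/spdk-nbd.sock socket, kernel assignment of the next free /dev/nbdX, then the readiness probe. Collapsed into a loop it amounts to the sketch below; that rpc.py prints the bare device path on stdout is an assumption to verify against your SPDK version:

  sock=/var/tmp/spdk-nbd.sock
  for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
      # no explicit /dev/nbdX argument: SPDK binds the first free NBD node
      dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk "$bdev")
      echo "exported $bdev as $dev"
  done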
00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.359 1+0 records in 00:06:23.359 1+0 records out 00:06:23.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641387 s, 6.4 MB/s 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.359 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.360 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:23.623 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.624 1+0 records in 00:06:23.624 1+0 records out 00:06:23.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120589 s, 3.4 MB/s 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:23.624 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.884 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:23.884 { 00:06:23.884 "nbd_device": "/dev/nbd0", 00:06:23.884 "bdev_name": "Nvme0n1" 00:06:23.884 }, 00:06:23.884 { 00:06:23.884 "nbd_device": "/dev/nbd1", 00:06:23.884 "bdev_name": "Nvme1n1p1" 00:06:23.884 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd2", 00:06:23.885 "bdev_name": "Nvme1n1p2" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd3", 00:06:23.885 "bdev_name": "Nvme2n1" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd4", 00:06:23.885 "bdev_name": "Nvme2n2" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd5", 00:06:23.885 "bdev_name": "Nvme2n3" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd6", 00:06:23.885 "bdev_name": "Nvme3n1" 00:06:23.885 } 00:06:23.885 ]' 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd0", 00:06:23.885 "bdev_name": "Nvme0n1" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd1", 00:06:23.885 "bdev_name": "Nvme1n1p1" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd2", 00:06:23.885 "bdev_name": "Nvme1n1p2" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd3", 00:06:23.885 "bdev_name": "Nvme2n1" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd4", 00:06:23.885 "bdev_name": "Nvme2n2" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd5", 00:06:23.885 "bdev_name": "Nvme2n3" 00:06:23.885 }, 00:06:23.885 { 00:06:23.885 "nbd_device": "/dev/nbd6", 00:06:23.885 "bdev_name": "Nvme3n1" 00:06:23.885 } 00:06:23.885 ]' 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.885 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.144 10:34:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.405 10:34:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.665 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.926 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.187 10:34:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
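The teardown half traced here mirrors the start: one nbd_stop_disk RPC per device, then a bounded poll (20 iterations, like waitfornbd_exit above) until the name leaves /proc/partitions. As one self-contained helper, a sketch; the retry pacing is an assumption, since the trace elides the sleep between iterations:

  nbd_stop_and_wait() {
      local dev=$1 name i
      name=$(basename "$dev")
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || return 0   # gone: clean exit
          sleep 0.1
      done
      return 1   # still present after the retry budget: report failure
  }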
00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.449 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.711 10:34:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:25.711 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:25.972 /dev/nbd0 00:06:25.972 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.972 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.972 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:25.972 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.973 1+0 records in 00:06:25.973 1+0 records out 00:06:25.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958448 s, 4.3 MB/s 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:25.973 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:26.233 /dev/nbd1 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.233 10:34:51 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.233 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.234 1+0 records in 00:06:26.234 1+0 records out 00:06:26.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102224 s, 4.0 MB/s 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.234 10:34:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:26.234 /dev/nbd10 00:06:26.502 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.503 1+0 records in 00:06:26.503 1+0 records out 00:06:26.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120776 s, 3.4 MB/s 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.503 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:26.503 /dev/nbd11 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.765 1+0 records in 00:06:26.765 1+0 records out 00:06:26.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012061 s, 3.4 MB/s 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:26.765 /dev/nbd12 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
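The waitfornbd probe repeated through this second attach pass does two distinct checks: name registration (grep in /proc/partitions) and a single-block O_DIRECT read, which is why each attach above is followed by a "1+0 records" dd report. A standalone equivalent with the same bounds, sketched below; the harness dd's into a scratch file and stats its size, and reading to /dev/null checks the same thing:

  waitfornbd() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1   # pacing assumed; the trace only shows the loop bounds
      done
      # registration alone is not enough: one direct read proves the SPDK
      # backend is actually serving I/O on the export
      dd if="/dev/$name" of=/dev/null bs=4096 count=1 iflag=direct
  }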
00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.765 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.765 1+0 records in 00:06:26.765 1+0 records out 00:06:26.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008742 s, 4.7 MB/s 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:27.025 /dev/nbd13 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.025 1+0 records in 00:06:27.025 1+0 records out 00:06:27.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101153 s, 4.0 MB/s 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.025 10:34:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:27.287 /dev/nbd14 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.287 1+0 records in 00:06:27.287 1+0 records out 00:06:27.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118583 s, 3.5 MB/s 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.287 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd0", 00:06:27.547 "bdev_name": "Nvme0n1" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd1", 00:06:27.547 "bdev_name": "Nvme1n1p1" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd10", 00:06:27.547 "bdev_name": "Nvme1n1p2" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd11", 00:06:27.547 "bdev_name": "Nvme2n1" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd12", 00:06:27.547 "bdev_name": "Nvme2n2" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd13", 00:06:27.547 "bdev_name": "Nvme2n3" 
00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd14", 00:06:27.547 "bdev_name": "Nvme3n1" 00:06:27.547 } 00:06:27.547 ]' 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd0", 00:06:27.547 "bdev_name": "Nvme0n1" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd1", 00:06:27.547 "bdev_name": "Nvme1n1p1" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd10", 00:06:27.547 "bdev_name": "Nvme1n1p2" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd11", 00:06:27.547 "bdev_name": "Nvme2n1" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd12", 00:06:27.547 "bdev_name": "Nvme2n2" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd13", 00:06:27.547 "bdev_name": "Nvme2n3" 00:06:27.547 }, 00:06:27.547 { 00:06:27.547 "nbd_device": "/dev/nbd14", 00:06:27.547 "bdev_name": "Nvme3n1" 00:06:27.547 } 00:06:27.547 ]' 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.547 /dev/nbd1 00:06:27.547 /dev/nbd10 00:06:27.547 /dev/nbd11 00:06:27.547 /dev/nbd12 00:06:27.547 /dev/nbd13 00:06:27.547 /dev/nbd14' 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.547 /dev/nbd1 00:06:27.547 /dev/nbd10 00:06:27.547 /dev/nbd11 00:06:27.547 /dev/nbd12 00:06:27.547 /dev/nbd13 00:06:27.547 /dev/nbd14' 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.547 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:27.548 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.548 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:27.548 256+0 records in 00:06:27.548 256+0 records out 00:06:27.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560144 s, 187 MB/s 00:06:27.548 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.548 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.806 256+0 records in 00:06:27.806 256+0 records out 00:06:27.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.224987 s, 4.7 MB/s 00:06:27.806 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.806 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.066 256+0 records in 00:06:28.066 256+0 records out 00:06:28.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138077 s, 7.6 MB/s 00:06:28.066 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.066 10:34:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:28.325 256+0 records in 00:06:28.325 256+0 records out 00:06:28.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229758 s, 4.6 MB/s 00:06:28.325 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.325 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:28.325 256+0 records in 00:06:28.325 256+0 records out 00:06:28.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178041 s, 5.9 MB/s 00:06:28.325 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.325 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:28.584 256+0 records in 00:06:28.584 256+0 records out 00:06:28.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219909 s, 4.8 MB/s 00:06:28.584 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.584 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:28.842 256+0 records in 00:06:28.842 256+0 records out 00:06:28.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184817 s, 5.7 MB/s 00:06:28.842 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.842 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:29.100 256+0 records in 00:06:29.100 256+0 records out 00:06:29.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200664 s, 5.2 MB/s 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.100 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.101 10:34:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.358 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.617 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.875 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.133 10:34:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.391 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.649 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:30.907 malloc_lvol_verify 00:06:30.907 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:31.210 4c118f0e-c7f1-44c6-9a73-c82d5c1167eb 00:06:31.211 10:34:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:31.496 5a5e57fe-b3f2-4052-ae69-c03f6cda2a37 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:31.496 /dev/nbd0 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:31.496 mke2fs 1.47.0 (5-Feb-2023) 00:06:31.496 Discarding device blocks: 0/4096 done 00:06:31.496 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:31.496 00:06:31.496 Allocating group tables: 0/1 done 00:06:31.496 Writing inode tables: 0/1 done 00:06:31.496 Creating journal (1024 blocks): done 00:06:31.496 Writing superblocks and filesystem accounting information: 0/1 done 00:06:31.496 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:31.496 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61354 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61354 ']' 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61354 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61354 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.754 killing process with pid 61354 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61354' 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61354 00:06:31.754 10:34:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61354 00:06:32.694 10:34:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:32.694 00:06:32.694 real 0m11.625s 00:06:32.694 user 0m16.066s 00:06:32.694 sys 0m3.766s 00:06:32.694 10:34:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.694 ************************************ 00:06:32.694 END TEST bdev_nbd 00:06:32.694 ************************************ 00:06:32.694 10:34:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:32.694 10:34:58 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:32.694 10:34:58 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:32.694 10:34:58 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:32.694 skipping fio tests on NVMe due to multi-ns failures. 00:06:32.694 10:34:58 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:32.694 10:34:58 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:32.694 10:34:58 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:32.694 10:34:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:32.694 10:34:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.694 10:34:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.694 ************************************ 00:06:32.694 START TEST bdev_verify 00:06:32.694 ************************************ 00:06:32.694 10:34:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:32.694 [2024-11-18 10:34:58.508339] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:32.694 [2024-11-18 10:34:58.508451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61774 ] 00:06:32.952 [2024-11-18 10:34:58.667625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.952 [2024-11-18 10:34:58.765658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.952 [2024-11-18 10:34:58.765756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.518 Running I/O for 5 seconds... 
00:06:35.859 19968.00 IOPS, 78.00 MiB/s
[2024-11-18T10:35:02.677Z] 20032.00 IOPS, 78.25 MiB/s
[2024-11-18T10:35:03.619Z] 19904.00 IOPS, 77.75 MiB/s
[2024-11-18T10:35:04.556Z] 19616.00 IOPS, 76.62 MiB/s
[2024-11-18T10:35:04.556Z] 19366.40 IOPS, 75.65 MiB/s
00:06:38.672 Latency(us)
00:06:38.672 [2024-11-18T10:35:04.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:38.672 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0xbd0bd
00:06:38.672 Nvme0n1 : 5.08 1359.76 5.31 0.00 0.00 93893.11 17341.83 89935.56
00:06:38.672 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:38.672 Nvme0n1 : 5.07 1364.18 5.33 0.00 0.00 93457.95 22282.24 94371.84
00:06:38.672 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0x4ff80
00:06:38.672 Nvme1n1p1 : 5.09 1358.78 5.31 0.00 0.00 93818.37 18955.03 83886.08
00:06:38.672 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:38.672 Nvme1n1p1 : 5.07 1363.69 5.33 0.00 0.00 93099.07 24702.03 78239.90
00:06:38.672 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0x4ff7f
00:06:38.672 Nvme1n1p2 : 5.09 1356.66 5.30 0.00 0.00 93798.44 22988.01 83482.78
00:06:38.672 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:38.672 Nvme1n1p2 : 5.09 1371.39 5.36 0.00 0.00 92460.35 5268.09 75013.51
00:06:38.672 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0x80000
00:06:38.672 Nvme2n1 : 5.10 1356.28 5.30 0.00 0.00 93624.48 22483.89 79853.10
00:06:38.672 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x80000 length 0x80000
00:06:38.672 Nvme2n1 : 5.09 1370.48 5.35 0.00 0.00 92323.87 7763.50 78239.90
00:06:38.672 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0x80000
00:06:38.672 Nvme2n2 : 5.10 1355.93 5.30 0.00 0.00 93403.25 20164.92 74610.22
00:06:38.672 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x80000 length 0x80000
00:06:38.672 Nvme2n2 : 5.09 1369.66 5.35 0.00 0.00 92179.89 9376.69 77836.60
00:06:38.672 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0x80000
00:06:38.672 Nvme2n3 : 5.10 1355.58 5.30 0.00 0.00 93250.17 19963.27 77433.30
00:06:38.672 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x80000 length 0x80000
00:06:38.672 Nvme2n3 : 5.10 1379.56 5.39 0.00 0.00 91496.06 7461.02 78239.90
00:06:38.672 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x0 length 0x20000
00:06:38.672 Nvme3n1 : 5.10 1355.23 5.29 0.00 0.00 93074.67 16736.89 81062.99
00:06:38.672 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.672 Verification LBA range: start 0x20000 length 0x20000
00:06:38.672 Nvme3n1 : 5.10 1379.18 5.39 0.00 0.00 91414.46 7763.50 80659.69
00:06:38.672 [2024-11-18T10:35:04.556Z] ===================================================================================================================
00:06:38.672 [2024-11-18T10:35:04.556Z] Total : 19096.37 74.60 0.00 0.00 92944.40 5268.09 94371.84
00:06:39.614
00:06:39.614 real 0m7.048s
00:06:39.614 user 0m13.123s
00:06:39.614 sys 0m0.230s
00:06:39.614 ************************************
00:06:39.614 END TEST bdev_verify
00:06:39.614 ************************************
00:06:39.614 10:35:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.614 10:35:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:39.877 10:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:39.877 10:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:39.877 10:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.877 10:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:39.877 ************************************
00:06:39.877 START TEST bdev_verify_big_io
00:06:39.877 ************************************
00:06:39.877 10:35:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:40.138 [2024-11-18 10:35:05.631401] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:40.138 [2024-11-18 10:35:05.631549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61871 ]
00:06:40.138 [2024-11-18 10:35:05.795349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:40.138 [2024-11-18 10:35:05.915183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.138 [2024-11-18 10:35:05.915196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:41.076 Running I/O for 5 seconds...
00:06:46.918 701.00 IOPS, 43.81 MiB/s
[2024-11-18T10:35:12.802Z] 2345.00 IOPS, 146.56 MiB/s
[2024-11-18T10:35:13.063Z] 2821.00 IOPS, 176.31 MiB/s
00:06:47.179 Latency(us)
00:06:47.179 [2024-11-18T10:35:13.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:47.179 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0xbd0b
00:06:47.179 Nvme0n1 : 5.57 105.25 6.58 0.00 0.00 1170464.01 30045.74 1206669.00
00:06:47.179 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:47.179 Nvme0n1 : 5.71 107.98 6.75 0.00 0.00 1130418.62 17442.66 1677721.60
00:06:47.179 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0x4ff8
00:06:47.179 Nvme1n1p1 : 5.72 91.66 5.73 0.00 0.00 1308163.21 91548.75 2258471.38
00:06:47.179 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x4ff8 length 0x4ff8
00:06:47.179 Nvme1n1p1 : 5.89 113.06 7.07 0.00 0.00 1051757.27 34078.72 1703532.70
00:06:47.179 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0x4ff7
00:06:47.179 Nvme1n1p2 : 5.83 87.83 5.49 0.00 0.00 1319390.13 103244.41 2181038.08
00:06:47.179 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x4ff7 length 0x4ff7
00:06:47.179 Nvme1n1p2 : 5.97 115.64 7.23 0.00 0.00 995416.59 53638.70 1729343.80
00:06:47.179 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0x8000
00:06:47.179 Nvme2n1 : 5.89 117.67 7.35 0.00 0.00 965124.64 56461.78 1316366.18
00:06:47.179 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x8000 length 0x8000
00:06:47.179 Nvme2n1 : 5.97 115.96 7.25 0.00 0.00 959102.31 68964.04 1768060.46
00:06:47.179 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0x8000
00:06:47.179 Nvme2n2 : 5.93 123.58 7.72 0.00 0.00 893299.90 39119.95 1058255.16
00:06:47.179 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x8000 length 0x8000
00:06:47.179 Nvme2n2 : 6.02 124.55 7.78 0.00 0.00 874872.26 50412.31 1806777.11
00:06:47.179 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0x8000
00:06:47.179 Nvme2n3 : 6.02 131.65 8.23 0.00 0.00 811875.70 45976.02 1084066.26
00:06:47.179 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x8000 length 0x8000
00:06:47.179 Nvme2n3 : 6.07 134.89 8.43 0.00 0.00 783207.64 22685.54 1832588.21
00:06:47.179 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x0 length 0x2000
00:06:47.179 Nvme3n1 : 6.06 151.75 9.48 0.00 0.00 686314.06 598.65 1103424.59
00:06:47.179 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:47.179 Verification LBA range: start 0x2000 length 0x2000
00:06:47.179 Nvme3n1 : 6.14 174.79 10.92 0.00 0.00 589301.03 409.60 1871304.86
00:06:47.179 [2024-11-18T10:35:13.063Z] ===================================================================================================================
00:06:47.179 [2024-11-18T10:35:13.063Z] Total : 1696.26 106.02 0.00 0.00 926118.11 409.60 2258471.38
00:06:48.566
00:06:48.566 real 0m8.488s
00:06:48.566 user 0m15.996s
00:06:48.566 sys 0m0.290s
00:06:48.566 10:35:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:48.566 10:35:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:48.566 ************************************
00:06:48.566 END TEST bdev_verify_big_io
00:06:48.566 ************************************
00:06:48.566 10:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:48.566 10:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:48.566 10:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.566 10:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:48.566 ************************************
00:06:48.566 START TEST bdev_write_zeroes
00:06:48.566 ************************************
00:06:48.566 10:35:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:48.566 [2024-11-18 10:35:14.169175] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:48.566 [2024-11-18 10:35:14.169305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61987 ]
00:06:48.566 [2024-11-18 10:35:14.330773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.566 [2024-11-18 10:35:14.431510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.509 Running I/O for 1 seconds...
00:06:50.452 63168.00 IOPS, 246.75 MiB/s
00:06:50.452 Latency(us)
00:06:50.452 [2024-11-18T10:35:16.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:50.452 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme0n1 : 1.02 8991.96 35.12 0.00 0.00 14201.29 6856.07 27424.30
00:06:50.452 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme1n1p1 : 1.03 8980.52 35.08 0.00 0.00 14196.73 11141.12 25105.33
00:06:50.452 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme1n1p2 : 1.03 8969.53 35.04 0.00 0.00 14160.23 10889.06 24097.08
00:06:50.452 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme2n1 : 1.03 8959.41 35.00 0.00 0.00 14148.36 11141.12 23391.31
00:06:50.452 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme2n2 : 1.03 8949.15 34.96 0.00 0.00 14131.04 10132.87 22988.01
00:06:50.452 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme2n3 : 1.03 8939.08 34.92 0.00 0.00 14117.77 9326.28 23592.96
00:06:50.452 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.452 Nvme3n1 : 1.03 8929.00 34.88 0.00 0.00 14110.91 8973.39 25407.80
00:06:50.452 [2024-11-18T10:35:16.336Z] ===================================================================================================================
00:06:50.452 [2024-11-18T10:35:16.336Z] Total : 62718.64 244.99 0.00 0.00 14152.33 6856.07 27424.30
00:06:51.017
00:06:51.018 real 0m2.746s
00:06:51.018 user 0m2.421s
00:06:51.018 sys 0m0.208s
00:06:51.018 10:35:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.018 ************************************
00:06:51.018 END TEST bdev_write_zeroes
00:06:51.018 ************************************
00:06:51.018 10:35:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:51.276 10:35:16 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:51.276 10:35:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:51.276 10:35:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.276 10:35:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:51.276 ************************************
00:06:51.276 START TEST bdev_json_nonenclosed
00:06:51.276 ************************************
00:06:51.276 10:35:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:51.276 [2024-11-18 10:35:16.980175] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:51.276 [2024-11-18 10:35:16.980297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62029 ] 00:06:51.276 [2024-11-18 10:35:17.138293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.534 [2024-11-18 10:35:17.233964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.534 [2024-11-18 10:35:17.234043] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:51.534 [2024-11-18 10:35:17.234060] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:51.534 [2024-11-18 10:35:17.234069] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.534 00:06:51.534 real 0m0.490s 00:06:51.534 user 0m0.296s 00:06:51.534 sys 0m0.090s 00:06:51.534 10:35:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.534 ************************************ 00:06:51.534 END TEST bdev_json_nonenclosed 00:06:51.534 ************************************ 00:06:51.534 10:35:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:51.792 10:35:17 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.792 10:35:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.792 10:35:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.792 10:35:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.792 ************************************ 00:06:51.792 START TEST bdev_json_nonarray 00:06:51.792 ************************************ 00:06:51.792 10:35:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.792 [2024-11-18 10:35:17.529735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:51.792 [2024-11-18 10:35:17.529854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62060 ] 00:06:52.053 [2024-11-18 10:35:17.687380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.053 [2024-11-18 10:35:17.783105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.053 [2024-11-18 10:35:17.783191] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:52.053 [2024-11-18 10:35:17.783218] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:52.053 [2024-11-18 10:35:17.783227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.315 00:06:52.315 real 0m0.491s 00:06:52.315 user 0m0.300s 00:06:52.315 sys 0m0.088s 00:06:52.315 10:35:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.315 ************************************ 00:06:52.315 END TEST bdev_json_nonarray 00:06:52.315 ************************************ 00:06:52.315 10:35:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:52.315 10:35:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:52.315 10:35:18 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:52.315 10:35:18 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:52.315 10:35:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.315 10:35:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.315 10:35:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.315 ************************************ 00:06:52.315 START TEST bdev_gpt_uuid 00:06:52.315 ************************************ 00:06:52.315 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:06:52.315 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:52.315 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:52.315 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62086 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62086 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62086 ']' 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.316 10:35:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.316 [2024-11-18 10:35:18.093957] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:52.316 [2024-11-18 10:35:18.094090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62086 ] 00:06:52.577 [2024-11-18 10:35:18.255624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.577 [2024-11-18 10:35:18.379556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.519 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.519 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:06:53.519 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.519 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.519 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.779 Some configs were skipped because the RPC state that can call them passed over. 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:53.779 { 00:06:53.779 "name": "Nvme1n1p1", 00:06:53.779 "aliases": [ 00:06:53.779 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:53.779 ], 00:06:53.779 "product_name": "GPT Disk", 00:06:53.779 "block_size": 4096, 00:06:53.779 "num_blocks": 655104, 00:06:53.779 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:53.779 "assigned_rate_limits": { 00:06:53.779 "rw_ios_per_sec": 0, 00:06:53.779 "rw_mbytes_per_sec": 0, 00:06:53.779 "r_mbytes_per_sec": 0, 00:06:53.779 "w_mbytes_per_sec": 0 00:06:53.779 }, 00:06:53.779 "claimed": false, 00:06:53.779 "zoned": false, 00:06:53.779 "supported_io_types": { 00:06:53.779 "read": true, 00:06:53.779 "write": true, 00:06:53.779 "unmap": true, 00:06:53.779 "flush": true, 00:06:53.779 "reset": true, 00:06:53.779 "nvme_admin": false, 00:06:53.779 "nvme_io": false, 00:06:53.779 "nvme_io_md": false, 00:06:53.779 "write_zeroes": true, 00:06:53.779 "zcopy": false, 00:06:53.779 "get_zone_info": false, 00:06:53.779 "zone_management": false, 00:06:53.779 "zone_append": false, 00:06:53.779 "compare": true, 00:06:53.779 "compare_and_write": false, 00:06:53.779 "abort": true, 00:06:53.779 "seek_hole": false, 00:06:53.779 "seek_data": false, 00:06:53.779 "copy": true, 00:06:53.779 "nvme_iov_md": false 00:06:53.779 }, 00:06:53.779 "driver_specific": { 
00:06:53.779 "gpt": { 00:06:53.779 "base_bdev": "Nvme1n1", 00:06:53.779 "offset_blocks": 256, 00:06:53.779 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:53.779 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:53.779 "partition_name": "SPDK_TEST_first" 00:06:53.779 } 00:06:53.779 } 00:06:53.779 } 00:06:53.779 ]' 00:06:53.779 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:06:53.780 { 00:06:53.780 "name": "Nvme1n1p2", 00:06:53.780 "aliases": [ 00:06:53.780 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:53.780 ], 00:06:53.780 "product_name": "GPT Disk", 00:06:53.780 "block_size": 4096, 00:06:53.780 "num_blocks": 655103, 00:06:53.780 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:53.780 "assigned_rate_limits": { 00:06:53.780 "rw_ios_per_sec": 0, 00:06:53.780 "rw_mbytes_per_sec": 0, 00:06:53.780 "r_mbytes_per_sec": 0, 00:06:53.780 "w_mbytes_per_sec": 0 00:06:53.780 }, 00:06:53.780 "claimed": false, 00:06:53.780 "zoned": false, 00:06:53.780 "supported_io_types": { 00:06:53.780 "read": true, 00:06:53.780 "write": true, 00:06:53.780 "unmap": true, 00:06:53.780 "flush": true, 00:06:53.780 "reset": true, 00:06:53.780 "nvme_admin": false, 00:06:53.780 "nvme_io": false, 00:06:53.780 "nvme_io_md": false, 00:06:53.780 "write_zeroes": true, 00:06:53.780 "zcopy": false, 00:06:53.780 "get_zone_info": false, 00:06:53.780 "zone_management": false, 00:06:53.780 "zone_append": false, 00:06:53.780 "compare": true, 00:06:53.780 "compare_and_write": false, 00:06:53.780 "abort": true, 00:06:53.780 "seek_hole": false, 00:06:53.780 "seek_data": false, 00:06:53.780 "copy": true, 00:06:53.780 "nvme_iov_md": false 00:06:53.780 }, 00:06:53.780 "driver_specific": { 00:06:53.780 "gpt": { 00:06:53.780 "base_bdev": "Nvme1n1", 00:06:53.780 "offset_blocks": 655360, 00:06:53.780 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:53.780 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:53.780 "partition_name": "SPDK_TEST_second" 00:06:53.780 } 00:06:53.780 } 00:06:53.780 } 00:06:53.780 ]' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62086 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62086 ']' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62086 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.780 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62086 00:06:54.041 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.041 killing process with pid 62086 00:06:54.041 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.041 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62086' 00:06:54.041 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62086 00:06:54.041 10:35:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62086 00:06:55.429 00:06:55.429 real 0m3.276s 00:06:55.429 user 0m3.329s 00:06:55.429 sys 0m0.468s 00:06:55.429 10:35:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.429 ************************************ 00:06:55.429 END TEST bdev_gpt_uuid 00:06:55.429 ************************************ 00:06:55.429 10:35:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:55.689 10:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:55.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.950 Waiting for block devices as requested 00:06:56.231 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:56.231 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:06:56.231 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:56.514 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.786 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:01.786 10:35:27 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:01.786 10:35:27 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:01.786 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:01.786 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:01.786 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:01.786 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:01.786 10:35:27 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:01.786 00:07:01.786 real 0m55.953s 00:07:01.786 user 1m10.446s 00:07:01.786 sys 0m8.184s 00:07:01.786 10:35:27 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.786 10:35:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.786 ************************************ 00:07:01.786 END TEST blockdev_nvme_gpt 00:07:01.786 ************************************ 00:07:01.786 10:35:27 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:01.786 10:35:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.786 10:35:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.786 10:35:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.786 ************************************ 00:07:01.786 START TEST nvme 00:07:01.786 ************************************ 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:01.786 * Looking for test storage... 00:07:01.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.786 10:35:27 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.786 10:35:27 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.786 10:35:27 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.786 10:35:27 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.786 10:35:27 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.786 10:35:27 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:01.786 10:35:27 nvme -- scripts/common.sh@345 -- # : 1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.786 10:35:27 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.786 10:35:27 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@353 -- # local d=1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.786 10:35:27 nvme -- scripts/common.sh@355 -- # echo 1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.786 10:35:27 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@353 -- # local d=2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.786 10:35:27 nvme -- scripts/common.sh@355 -- # echo 2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.786 10:35:27 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.786 10:35:27 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.786 10:35:27 nvme -- scripts/common.sh@368 -- # return 0 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.786 --rc genhtml_branch_coverage=1 00:07:01.786 --rc genhtml_function_coverage=1 00:07:01.786 --rc genhtml_legend=1 00:07:01.786 --rc geninfo_all_blocks=1 00:07:01.786 --rc geninfo_unexecuted_blocks=1 00:07:01.786 00:07:01.786 ' 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.786 --rc genhtml_branch_coverage=1 00:07:01.786 --rc genhtml_function_coverage=1 00:07:01.786 --rc genhtml_legend=1 00:07:01.786 --rc geninfo_all_blocks=1 00:07:01.786 --rc geninfo_unexecuted_blocks=1 00:07:01.786 00:07:01.786 ' 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.786 --rc genhtml_branch_coverage=1 00:07:01.786 --rc genhtml_function_coverage=1 00:07:01.786 --rc genhtml_legend=1 00:07:01.786 --rc geninfo_all_blocks=1 00:07:01.786 --rc geninfo_unexecuted_blocks=1 00:07:01.786 00:07:01.786 ' 00:07:01.786 10:35:27 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.786 --rc genhtml_branch_coverage=1 00:07:01.786 --rc genhtml_function_coverage=1 00:07:01.786 --rc genhtml_legend=1 00:07:01.786 --rc geninfo_all_blocks=1 00:07:01.786 --rc geninfo_unexecuted_blocks=1 00:07:01.786 00:07:01.786 ' 00:07:01.786 10:35:27 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:02.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.611 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.611 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.869 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.869 10:35:28 nvme -- nvme/nvme.sh@79 -- # uname 00:07:02.869 10:35:28 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:02.869 10:35:28 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:02.869 10:35:28 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:02.869 Waiting for stub to ready 
for secondary processes... 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1075 -- # stubpid=62727 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62727 ]] 00:07:02.869 10:35:28 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:02.869 [2024-11-18 10:35:28.590864] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:02.869 [2024-11-18 10:35:28.591172] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:03.440 [2024-11-18 10:35:29.320341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.697 [2024-11-18 10:35:29.410973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.697 [2024-11-18 10:35:29.411095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.697 [2024-11-18 10:35:29.411096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.698 [2024-11-18 10:35:29.424518] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:03.698 [2024-11-18 10:35:29.424659] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.698 [2024-11-18 10:35:29.436984] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:03.698 [2024-11-18 10:35:29.437132] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:03.698 [2024-11-18 10:35:29.439616] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.698 [2024-11-18 10:35:29.439846] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:03.698 [2024-11-18 10:35:29.440012] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:03.698 [2024-11-18 10:35:29.442281] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.698 [2024-11-18 10:35:29.442484] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:03.698 [2024-11-18 10:35:29.442558] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:03.698 [2024-11-18 10:35:29.444336] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:03.698 [2024-11-18 10:35:29.444467] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:03.698 [2024-11-18 10:35:29.444523] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:03.698 [2024-11-18 10:35:29.444557] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:03.698 [2024-11-18 10:35:29.444587] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:03.698 done. 00:07:03.698 10:35:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:03.698 10:35:29 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:03.698 10:35:29 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:03.698 10:35:29 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:03.698 10:35:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.698 10:35:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.698 ************************************ 00:07:03.698 START TEST nvme_reset 00:07:03.698 ************************************ 00:07:03.698 10:35:29 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:03.956 Initializing NVMe Controllers 00:07:03.956 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:03.956 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:03.956 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:03.956 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:03.956 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:03.956 ************************************ 00:07:03.956 END TEST nvme_reset 00:07:03.956 ************************************ 00:07:03.956 00:07:03.956 real 0m0.218s 00:07:03.956 user 0m0.083s 00:07:03.956 sys 0m0.084s 00:07:03.956 10:35:29 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.956 10:35:29 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:03.956 10:35:29 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:03.956 10:35:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.956 10:35:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.956 10:35:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.956 ************************************ 00:07:03.956 START TEST nvme_identify 00:07:03.956 ************************************ 00:07:03.956 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:03.956 10:35:29 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:03.956 10:35:29 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:03.956 10:35:29 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:03.956 10:35:29 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:03.956 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:03.956 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:03.956 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:04.217 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:04.217 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:04.217 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:04.217 10:35:29 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:04.217 10:35:29 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:04.217 
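The lcov version gate traced in this nvme test's preamble (cmp_versions and lt in scripts/common.sh) decides whether the branch/function coverage flags get exported: each version string is split into numeric fields and the fields are compared left to right, so 1.15 sorts below 2 and the coverage options stay enabled. A condensed sketch of that comparison, splitting on dots only (the helper in the trace also splits on - and :):

  # Return success when version $1 sorts strictly below version $2.
  lt() {
    local IFS=. i
    local -a a=($1) b=($2)   # split both versions into numeric fields
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field loses
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'lcov < 2: enable branch/function coverage flags'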
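Immediately above, the nvme_identify helper assembles its device list before producing the per-controller dump that follows: gen_nvme.sh renders the attached controllers as JSON, jq pulls out each PCIe address (traddr), and spdk_nvme_identify then reports on every controller it can claim. The same flow, condensed from the trace (paths as used in this job):

  rootdir=/home/vagrant/spdk_repo/spdk
  # Collect the NVMe BDFs from the generated bdev config.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"
  # Dump identify data for every controller; -i 0 matches the invocation above.
  "$rootdir/build/bin/spdk_nvme_identify" -i 0

The four blocks below are that command's output, one per QEMU controller (serials 12340, 12341, 12343, 12342).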
===================================================== 00:07:04.217 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:04.217 ===================================================== 00:07:04.217 Controller Capabilities/Features 00:07:04.217 ================================ 00:07:04.217 Vendor ID: 1b36 00:07:04.217 Subsystem Vendor ID: 1af4 00:07:04.218 Serial Number: 12340 00:07:04.218 Model Number: QEMU NVMe Ctrl 00:07:04.218 Firmware Version: 8.0.0 00:07:04.218 Recommended Arb Burst: 6 00:07:04.218 IEEE OUI Identifier: 00 54 52 00:07:04.218 Multi-path I/O 00:07:04.218 May have multiple subsystem ports: No 00:07:04.218 May have multiple controllers: No 00:07:04.218 Associated with SR-IOV VF: No 00:07:04.218 Max Data Transfer Size: 524288 00:07:04.218 Max Number of Namespaces: 256 00:07:04.218 Max Number of I/O Queues: 64 00:07:04.218 NVMe Specification Version (VS): 1.4 00:07:04.218 NVMe Specification Version (Identify): 1.4 00:07:04.218 Maximum Queue Entries: 2048 00:07:04.218 Contiguous Queues Required: Yes 00:07:04.218 Arbitration Mechanisms Supported 00:07:04.218 Weighted Round Robin: Not Supported 00:07:04.218 Vendor Specific: Not Supported 00:07:04.218 Reset Timeout: 7500 ms 00:07:04.218 Doorbell Stride: 4 bytes 00:07:04.218 NVM Subsystem Reset: Not Supported 00:07:04.218 Command Sets Supported 00:07:04.218 NVM Command Set: Supported 00:07:04.218 Boot Partition: Not Supported 00:07:04.218 Memory Page Size Minimum: 4096 bytes 00:07:04.218 Memory Page Size Maximum: 65536 bytes 00:07:04.218 Persistent Memory Region: Not Supported 00:07:04.218 Optional Asynchronous Events Supported 00:07:04.218 Namespace Attribute Notices: Supported 00:07:04.218 Firmware Activation Notices: Not Supported 00:07:04.218 ANA Change Notices: Not Supported 00:07:04.218 PLE Aggregate Log Change Notices: Not Supported 00:07:04.218 LBA Status Info Alert Notices: Not Supported 00:07:04.218 EGE Aggregate Log Change Notices: Not Supported 00:07:04.218 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.218 Zone Descriptor Change Notices: Not Supported 00:07:04.218 Discovery Log Change Notices: Not Supported 00:07:04.218 Controller Attributes 00:07:04.218 128-bit Host Identifier: Not Supported 00:07:04.218 Non-Operational Permissive Mode: Not Supported 00:07:04.218 NVM Sets: Not Supported 00:07:04.218 Read Recovery Levels: Not Supported 00:07:04.218 Endurance Groups: Not Supported 00:07:04.218 Predictable Latency Mode: Not Supported 00:07:04.218 Traffic Based Keep ALive: Not Supported 00:07:04.218 Namespace Granularity: Not Supported 00:07:04.218 SQ Associations: Not Supported 00:07:04.218 UUID List: Not Supported 00:07:04.218 Multi-Domain Subsystem: Not Supported 00:07:04.218 Fixed Capacity Management: Not Supported 00:07:04.218 Variable Capacity Management: Not Supported 00:07:04.218 Delete Endurance Group: Not Supported 00:07:04.218 Delete NVM Set: Not Supported 00:07:04.218 Extended LBA Formats Supported: Supported 00:07:04.218 Flexible Data Placement Supported: Not Supported 00:07:04.218 00:07:04.218 Controller Memory Buffer Support 00:07:04.218 ================================ 00:07:04.218 Supported: No 00:07:04.218 00:07:04.218 Persistent Memory Region Support 00:07:04.218 ================================ 00:07:04.218 Supported: No 00:07:04.218 00:07:04.218 Admin Command Set Attributes 00:07:04.218 ============================ 00:07:04.218 Security Send/Receive: Not Supported 00:07:04.218 Format NVM: Supported 00:07:04.218 Firmware Activate/Download: Not Supported 00:07:04.218 Namespace Management: 
Supported 00:07:04.218 Device Self-Test: Not Supported 00:07:04.218 Directives: Supported 00:07:04.218 NVMe-MI: Not Supported 00:07:04.218 Virtualization Management: Not Supported 00:07:04.218 Doorbell Buffer Config: Supported 00:07:04.218 Get LBA Status Capability: Not Supported 00:07:04.218 Command & Feature Lockdown Capability: Not Supported 00:07:04.218 Abort Command Limit: 4 00:07:04.218 Async Event Request Limit: 4 00:07:04.218 Number of Firmware Slots: N/A 00:07:04.218 Firmware Slot 1 Read-Only: N/A 00:07:04.218 Firmware Activation Without Reset: N/A 00:07:04.218 Multiple Update Detection Support: N/A 00:07:04.218 Firmware Update Granularity: No Information Provided 00:07:04.218 Per-Namespace SMART Log: Yes 00:07:04.218 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.218 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:04.218 Command Effects Log Page: Supported 00:07:04.218 Get Log Page Extended Data: Supported 00:07:04.218 Telemetry Log Pages: Not Supported 00:07:04.218 Persistent Event Log Pages: Not Supported 00:07:04.218 Supported Log Pages Log Page: May Support 00:07:04.218 Commands Supported & Effects Log Page: Not Supported 00:07:04.218 Feature Identifiers & Effects Log Page:May Support 00:07:04.218 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.218 Data Area 4 for Telemetry Log: Not Supported 00:07:04.218 Error Log Page Entries Supported: 1 00:07:04.218 Keep Alive: Not Supported 00:07:04.218 00:07:04.218 NVM Command Set Attributes 00:07:04.218 ========================== 00:07:04.218 Submission Queue Entry Size 00:07:04.218 Max: 64 00:07:04.218 Min: 64 00:07:04.218 Completion Queue Entry Size 00:07:04.218 Max: 16 00:07:04.218 Min: 16 00:07:04.218 Number of Namespaces: 256 00:07:04.218 Compare Command: Supported 00:07:04.218 Write Uncorrectable Command: Not Supported 00:07:04.218 Dataset Management Command: Supported 00:07:04.218 Write Zeroes Command: Supported 00:07:04.218 Set Features Save Field: Supported 00:07:04.218 Reservations: Not Supported 00:07:04.218 Timestamp: Supported 00:07:04.218 Copy: Supported 00:07:04.218 Volatile Write Cache: Present 00:07:04.218 Atomic Write Unit (Normal): 1 00:07:04.218 Atomic Write Unit (PFail): 1 00:07:04.218 Atomic Compare & Write Unit: 1 00:07:04.218 Fused Compare & Write: Not Supported 00:07:04.218 Scatter-Gather List 00:07:04.218 SGL Command Set: Supported 00:07:04.218 SGL Keyed: Not Supported 00:07:04.218 SGL Bit Bucket Descriptor: Not Supported 00:07:04.218 SGL Metadata Pointer: Not Supported 00:07:04.218 Oversized SGL: Not Supported 00:07:04.218 SGL Metadata Address: Not Supported 00:07:04.218 SGL Offset: Not Supported 00:07:04.218 Transport SGL Data Block: Not Supported 00:07:04.218 Replay Protected Memory Block: Not Supported 00:07:04.218 00:07:04.218 Firmware Slot Information 00:07:04.218 ========================= 00:07:04.218 Active slot: 1 00:07:04.218 Slot 1 Firmware Revision: 1.0 00:07:04.218 00:07:04.218 00:07:04.218 Commands Supported and Effects 00:07:04.218 ============================== 00:07:04.218 Admin Commands 00:07:04.218 -------------- 00:07:04.218 Delete I/O Submission Queue (00h): Supported 00:07:04.218 Create I/O Submission Queue (01h): Supported 00:07:04.218 Get Log Page (02h): Supported 00:07:04.218 Delete I/O Completion Queue (04h): Supported 00:07:04.218 Create I/O Completion Queue (05h): Supported 00:07:04.218 Identify (06h): Supported 00:07:04.218 Abort (08h): Supported 00:07:04.218 Set Features (09h): Supported 00:07:04.218 Get Features (0Ah): Supported 00:07:04.218 Asynchronous 
Event Request (0Ch): Supported 00:07:04.218 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.218 Directive Send (19h): Supported 00:07:04.218 Directive Receive (1Ah): Supported 00:07:04.218 Virtualization Management (1Ch): Supported 00:07:04.218 Doorbell Buffer Config (7Ch): Supported 00:07:04.218 Format NVM (80h): Supported LBA-Change 00:07:04.218 I/O Commands 00:07:04.218 ------------ 00:07:04.218 Flush (00h): Supported LBA-Change 00:07:04.218 Write (01h): Supported LBA-Change 00:07:04.218 Read (02h): Supported 00:07:04.218 Compare (05h): Supported 00:07:04.218 Write Zeroes (08h): Supported LBA-Change 00:07:04.218 Dataset Management (09h): Supported LBA-Change 00:07:04.218 Unknown (0Ch): Supported 00:07:04.218 Unknown (12h): Supported 00:07:04.218 Copy (19h): Supported LBA-Change 00:07:04.218 Unknown (1Dh): Supported LBA-Change 00:07:04.218 00:07:04.218 Error Log 00:07:04.218 ========= 00:07:04.218 00:07:04.218 Arbitration 00:07:04.218 =========== 00:07:04.218 Arbitration Burst: no limit 00:07:04.218 00:07:04.218 Power Management 00:07:04.218 ================ 00:07:04.218 Number of Power States: 1 00:07:04.218 Current Power State: Power State #0 00:07:04.218 Power State #0: 00:07:04.218 Max Power: 25.00 W 00:07:04.218 Non-Operational State: Operational 00:07:04.218 Entry Latency: 16 microseconds 00:07:04.218 Exit Latency: 4 microseconds 00:07:04.218 Relative Read Throughput: 0 00:07:04.218 Relative Read Latency: 0 00:07:04.219 Relative Write Throughput: 0 00:07:04.219 Relative Write Latency: 0 00:07:04.219 Idle Power: Not Reported 00:07:04.219 Active Power: Not Reported 00:07:04.219 Non-Operational Permissive Mode: Not Supported 00:07:04.219 00:07:04.219 Health Information 00:07:04.219 ================== 00:07:04.219 Critical Warnings: 00:07:04.219 Available Spare Space: OK 00:07:04.219 Temperature: OK 00:07:04.219 Device Reliability: OK 00:07:04.219 Read Only: No 00:07:04.219 Volatile Memory Backup: OK 00:07:04.219 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.219 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.219 Available Spare: 0% 00:07:04.219 Available Spare Threshold: 0% 00:07:04.219 Life Percentage Used: 0% 00:07:04.219 Data Units Read: 656 00:07:04.219 Data Units Written: 584 00:07:04.219 Host Read Commands: 34306 00:07:04.219 Host Write Commands: 34092 00:07:04.219 Controller Busy Time: 0 minutes 00:07:04.219 Power Cycles: 0 00:07:04.219 Power On Hours: 0 hours 00:07:04.219 Unsafe Shutdowns: 0 00:07:04.219 Unrecoverable Media Errors: 0 00:07:04.219 Lifetime Error Log Entries: 0 00:07:04.219 Warning Temperature Time: 0 minutes 00:07:04.219 Critical Temperature Time: 0 minutes 00:07:04.219 00:07:04.219 Number of Queues 00:07:04.219 ================ 00:07:04.219 Number of I/O Submission Queues: 64 00:07:04.219 Number of I/O Completion Queues: 64 00:07:04.219 00:07:04.219 ZNS Specific Controller Data 00:07:04.219 ============================ 00:07:04.219 Zone Append Size Limit: 0 00:07:04.219 00:07:04.219 00:07:04.219 Active Namespaces 00:07:04.219 ================= 00:07:04.219 Namespace ID:1 00:07:04.219 Error Recovery Timeout: Unlimited 00:07:04.219 Command Set Identifier: NVM (00h) 00:07:04.219 Deallocate: Supported 00:07:04.219 Deallocated/Unwritten Error: Supported 00:07:04.219 Deallocated Read Value: All 0x00 00:07:04.219 Deallocate in Write Zeroes: Not Supported 00:07:04.219 Deallocated Guard Field: 0xFFFF 00:07:04.219 Flush: Supported 00:07:04.219 Reservation: Not Supported 00:07:04.219 Metadata Transferred as: Separate Metadata Buffer 
00:07:04.219 Namespace Sharing Capabilities: Private 00:07:04.219 Size (in LBAs): 1548666 (5GiB) 00:07:04.219 Capacity (in LBAs): 1548666 (5GiB) 00:07:04.219 Utilization (in LBAs): 1548666 (5GiB) 00:07:04.219 Thin Provisioning: Not Supported 00:07:04.219 Per-NS Atomic Units: No 00:07:04.219 Maximum Single Source Range Length: 128 00:07:04.219 Maximum Copy Length: 128 00:07:04.219 Maximum Source Range Count: 128 00:07:04.219 NGUID/EUI64 Never Reused: No 00:07:04.219 Namespace Write Protected: No 00:07:04.219 Number of LBA Formats: 8 00:07:04.219 Current LBA Format: LBA Format #07 00:07:04.219 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.219 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.219 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.219 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.219 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.219 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.219 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.219 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.219 00:07:04.219 NVM Specific Namespace Data 00:07:04.219 =========================== 00:07:04.219 Logical Block Storage Tag Mask: 0 00:07:04.219 Protection Information Capabilities: 00:07:04.219 16b Guard Protection Information Storage Tag Support: No 00:07:04.219 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.219 Storage Tag Check Read Support: No 00:07:04.219 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.219 ===================================================== 00:07:04.219 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:04.219 ===================================================== 00:07:04.219 Controller Capabilities/Features 00:07:04.219 ================================ 00:07:04.219 Vendor ID: 1b36 00:07:04.219 Subsystem Vendor ID: 1af4 00:07:04.219 Serial Number: 12341 00:07:04.219 Model Number: QEMU NVMe Ctrl 00:07:04.219 Firmware Version: 8.0.0 00:07:04.219 Recommended Arb Burst: 6 00:07:04.219 IEEE OUI Identifier: 00 54 52 00:07:04.219 Multi-path I/O 00:07:04.219 May have multiple subsystem ports: No 00:07:04.219 May have multiple controllers: No 00:07:04.219 Associated with SR-IOV VF: No 00:07:04.219 Max Data Transfer Size: 524288 00:07:04.219 Max Number of Namespaces: 256 00:07:04.219 Max Number of I/O Queues: 64 00:07:04.219 NVMe Specification Version (VS): 1.4 00:07:04.219 NVMe Specification Version (Identify): 1.4 00:07:04.219 Maximum Queue Entries: 2048 00:07:04.219 Contiguous Queues Required: Yes 00:07:04.219 Arbitration Mechanisms Supported 00:07:04.219 Weighted Round Robin: Not Supported 00:07:04.219 Vendor Specific: Not Supported 00:07:04.219 Reset Timeout: 7500 ms 00:07:04.219 Doorbell Stride: 
4 bytes 00:07:04.219 NVM Subsystem Reset: Not Supported 00:07:04.219 Command Sets Supported 00:07:04.219 NVM Command Set: Supported 00:07:04.219 Boot Partition: Not Supported 00:07:04.219 Memory Page Size Minimum: 4096 bytes 00:07:04.219 Memory Page Size Maximum: 65536 bytes 00:07:04.219 Persistent Memory Region: Not Supported 00:07:04.219 Optional Asynchronous Events Supported 00:07:04.219 Namespace Attribute Notices: Supported 00:07:04.219 Firmware Activation Notices: Not Supported 00:07:04.219 ANA Change Notices: Not Supported 00:07:04.219 PLE Aggregate Log Change Notices: Not Supported 00:07:04.219 LBA Status Info Alert Notices: Not Supported 00:07:04.219 EGE Aggregate Log Change Notices: Not Supported 00:07:04.219 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.219 Zone Descriptor Change Notices: Not Supported 00:07:04.219 Discovery Log Change Notices: Not Supported 00:07:04.219 Controller Attributes 00:07:04.219 128-bit Host Identifier: Not Supported 00:07:04.219 Non-Operational Permissive Mode: Not Supported 00:07:04.219 NVM Sets: Not Supported 00:07:04.219 Read Recovery Levels: Not Supported 00:07:04.219 Endurance Groups: Not Supported 00:07:04.219 Predictable Latency Mode: Not Supported 00:07:04.219 Traffic Based Keep ALive: Not Supported 00:07:04.219 Namespace Granularity: Not Supported 00:07:04.219 SQ Associations: Not Supported 00:07:04.219 UUID List: Not Supported 00:07:04.219 Multi-Domain Subsystem: Not Supported 00:07:04.219 Fixed Capacity Management: Not Supported 00:07:04.219 Variable Capacity Management: Not Supported 00:07:04.219 Delete Endurance Group: Not Supported 00:07:04.219 Delete NVM Set: Not Supported 00:07:04.219 Extended LBA Formats Supported: Supported 00:07:04.219 Flexible Data Placement Supported: Not Supported 00:07:04.219 00:07:04.219 Controller Memory Buffer Support 00:07:04.219 ================================ 00:07:04.219 Supported: No 00:07:04.219 00:07:04.219 Persistent Memory Region Support 00:07:04.219 ================================ 00:07:04.219 Supported: No 00:07:04.219 00:07:04.219 Admin Command Set Attributes 00:07:04.219 ============================ 00:07:04.219 Security Send/Receive: Not Supported 00:07:04.219 Format NVM: Supported 00:07:04.219 Firmware Activate/Download: Not Supported 00:07:04.219 Namespace Management: Supported 00:07:04.219 Device Self-Test: Not Supported 00:07:04.219 Directives: Supported 00:07:04.219 NVMe-MI: Not Supported 00:07:04.219 Virtualization Management: Not Supported 00:07:04.219 Doorbell Buffer Config: Supported 00:07:04.219 Get LBA Status Capability: Not Supported 00:07:04.219 Command & Feature Lockdown Capability: Not Supported 00:07:04.219 Abort Command Limit: 4 00:07:04.219 Async Event Request Limit: 4 00:07:04.219 Number of Firmware Slots: N/A 00:07:04.219 Firmware Slot 1 Read-Only: N/A 00:07:04.219 Firmware Activation Without Reset: N/A 00:07:04.219 Multiple Update Detection Support: N/A 00:07:04.219 Firmware Update Granularity: No Information Provided 00:07:04.219 Per-Namespace SMART Log: Yes 00:07:04.219 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.219 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:04.219 Command Effects Log Page: Supported 00:07:04.220 Get Log Page Extended Data: Supported 00:07:04.220 Telemetry Log Pages: Not Supported 00:07:04.220 Persistent Event Log Pages: Not Supported 00:07:04.220 Supported Log Pages Log Page: May Support 00:07:04.220 Commands Supported & Effects Log Page: Not Supported 00:07:04.220 Feature Identifiers & Effects Log Page:May Support 
00:07:04.220 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.220 Data Area 4 for Telemetry Log: Not Supported 00:07:04.220 Error Log Page Entries Supported: 1 00:07:04.220 Keep Alive: Not Supported 00:07:04.220 00:07:04.220 NVM Command Set Attributes 00:07:04.220 ========================== 00:07:04.220 Submission Queue Entry Size 00:07:04.220 Max: 64 00:07:04.220 Min: 64 00:07:04.220 Completion Queue Entry Size 00:07:04.220 Max: 16 00:07:04.220 Min: 16 00:07:04.220 Number of Namespaces: 256 00:07:04.220 Compare Command: Supported 00:07:04.220 Write Uncorrectable Command: Not Supported 00:07:04.220 Dataset Management Command: Supported 00:07:04.220 Write Zeroes Command: Supported 00:07:04.220 Set Features Save Field: Supported 00:07:04.220 Reservations: Not Supported 00:07:04.220 Timestamp: Supported 00:07:04.220 Copy: Supported 00:07:04.220 Volatile Write Cache: Present 00:07:04.220 Atomic Write Unit (Normal): 1 00:07:04.220 Atomic Write Unit (PFail): 1 00:07:04.220 Atomic Compare & Write Unit: 1 00:07:04.220 Fused Compare & Write: Not Supported 00:07:04.220 Scatter-Gather List 00:07:04.220 SGL Command Set: Supported 00:07:04.220 SGL Keyed: Not Supported 00:07:04.220 SGL Bit Bucket Descriptor: Not Supported 00:07:04.220 SGL Metadata Pointer: Not Supported 00:07:04.220 Oversized SGL: Not Supported 00:07:04.220 SGL Metadata Address: Not Supported 00:07:04.220 SGL Offset: Not Supported 00:07:04.220 Transport SGL Data Block: Not Supported 00:07:04.220 Replay Protected Memory Block: Not Supported 00:07:04.220 00:07:04.220 Firmware Slot Information 00:07:04.220 ========================= 00:07:04.220 Active slot: 1 00:07:04.220 Slot 1 Firmware Revision: 1.0 00:07:04.220 00:07:04.220 00:07:04.220 Commands Supported and Effects 00:07:04.220 ============================== 00:07:04.220 Admin Commands 00:07:04.220 -------------- 00:07:04.220 Delete I/O Submission Queue (00h): Supported 00:07:04.220 Create I/O Submission Queue (01h): Supported 00:07:04.220 Get Log Page (02h): Supported 00:07:04.220 Delete I/O Completion Queue (04h): Supported 00:07:04.220 Create I/O Completion Queue (05h): Supported 00:07:04.220 Identify (06h): Supported 00:07:04.220 Abort (08h): Supported 00:07:04.220 Set Features (09h): Supported 00:07:04.220 Get Features (0Ah): Supported 00:07:04.220 Asynchronous Event Request (0Ch): Supported 00:07:04.220 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.220 Directive Send (19h): Supported 00:07:04.220 Directive Receive (1Ah): Supported 00:07:04.220 Virtualization Management (1Ch): Supported 00:07:04.220 Doorbell Buffer Config (7Ch): Supported 00:07:04.220 Format NVM (80h): Supported LBA-Change 00:07:04.220 I/O Commands 00:07:04.220 ------------ 00:07:04.220 Flush (00h): Supported LBA-Change 00:07:04.220 Write (01h): Supported LBA-Change 00:07:04.220 Read (02h): Supported 00:07:04.220 Compare (05h): Supported 00:07:04.220 Write Zeroes (08h): Supported LBA-Change 00:07:04.220 Dataset Management (09h): Supported LBA-Change 00:07:04.220 Unknown (0Ch): Supported 00:07:04.220 Unknown (12h): Supported 00:07:04.220 Copy (19h): Supported LBA-Change 00:07:04.220 Unknown (1Dh): Supported LBA-Change 00:07:04.220 00:07:04.220 Error Log 00:07:04.220 ========= 00:07:04.220 00:07:04.220 Arbitration 00:07:04.220 =========== 00:07:04.220 Arbitration Burst: no limit 00:07:04.220 00:07:04.220 Power Management 00:07:04.220 ================ 00:07:04.220 Number of Power States: 1 00:07:04.220 Current Power State: Power State #0 00:07:04.220 Power State #0: 00:07:04.220 Max 
Power: 25.00 W 00:07:04.220 Non-Operational State: Operational 00:07:04.220 Entry Latency: 16 microseconds 00:07:04.220 Exit Latency: 4 microseconds 00:07:04.220 Relative Read Throughput: 0 00:07:04.220 Relative Read Latency: 0 00:07:04.220 Relative Write Throughput: 0 00:07:04.220 Relative Write Latency: 0 00:07:04.220 Idle Power: Not Reported 00:07:04.220 Active Power: Not Reported 00:07:04.220 Non-Operational Permissive Mode: Not Supported 00:07:04.220 00:07:04.220 Health Information 00:07:04.220 ================== 00:07:04.220 Critical Warnings: 00:07:04.220 Available Spare Space: OK 00:07:04.220 Temperature: OK 00:07:04.220 Device Reliability: OK 00:07:04.220 Read Only: No 00:07:04.220 Volatile Memory Backup: OK 00:07:04.220 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.220 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.220 Available Spare: 0% 00:07:04.220 Available Spare Threshold: 0% 00:07:04.220 Life Percentage Used: 0% 00:07:04.220 Data Units Read: 986 00:07:04.220 Data Units Written: 853 00:07:04.220 Host Read Commands: 50417 00:07:04.220 Host Write Commands: 49211 00:07:04.220 Controller Busy Time: 0 minutes 00:07:04.220 Power Cycles: 0 00:07:04.220 Power On Hours: 0 hours 00:07:04.220 Unsafe Shutdowns: 0 00:07:04.220 Unrecoverable Media Errors: 0 00:07:04.220 Lifetime Error Log Entries: 0 00:07:04.220 Warning Temperature Time: 0 minutes 00:07:04.220 Critical Temperature Time: 0 minutes 00:07:04.220 00:07:04.220 Number of Queues 00:07:04.220 ================ 00:07:04.220 Number of I/O Submission Queues: 64 00:07:04.220 Number of I/O Completion Queues: 64 00:07:04.220 00:07:04.220 ZNS Specific Controller Data 00:07:04.220 ============================ 00:07:04.220 Zone Append Size Limit: 0 00:07:04.220 00:07:04.220 00:07:04.220 Active Namespaces 00:07:04.220 ================= 00:07:04.220 Namespace ID:1 00:07:04.220 Error Recovery Timeout: Unlimited 00:07:04.220 Command Set Identifier: NVM (00h) 00:07:04.220 Deallocate: Supported 00:07:04.220 Deallocated/Unwritten Error: Supported 00:07:04.220 Deallocated Read Value: All 0x00 00:07:04.220 Deallocate in Write Zeroes: Not Supported 00:07:04.220 Deallocated Guard Field: 0xFFFF 00:07:04.220 Flush: Supported 00:07:04.220 Reservation: Not Supported 00:07:04.220 Namespace Sharing Capabilities: Private 00:07:04.220 Size (in LBAs): 1310720 (5GiB) 00:07:04.220 Capacity (in LBAs): 1310720 (5GiB) 00:07:04.220 Utilization (in LBAs): 1310720 (5GiB) 00:07:04.220 Thin Provisioning: Not Supported 00:07:04.220 Per-NS Atomic Units: No 00:07:04.220 Maximum Single Source Range Length: 128 00:07:04.220 Maximum Copy Length: 128 00:07:04.220 Maximum Source Range Count: 128 00:07:04.220 NGUID/EUI64 Never Reused: No 00:07:04.220 Namespace Write Protected: No 00:07:04.220 Number of LBA Formats: 8 00:07:04.220 Current LBA Format: LBA Format #04 00:07:04.220 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.220 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.220 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.220 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.220 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.220 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.220 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.220 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.220 00:07:04.220 NVM Specific Namespace Data 00:07:04.220 =========================== 00:07:04.220 Logical Block Storage Tag Mask: 0 00:07:04.220 Protection Information Capabilities: 00:07:04.220 16b Guard 
Protection Information Storage Tag Support: No 00:07:04.220 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.220 Storage Tag Check Read Support: No 00:07:04.220 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.220 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.221 ===================================================== 00:07:04.221 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:04.221 ===================================================== 00:07:04.221 Controller Capabilities/Features 00:07:04.221 ================================ 00:07:04.221 Vendor ID: 1b36 00:07:04.221 Subsystem Vendor ID: 1af4 00:07:04.221 Serial Number: 12343 00:07:04.221 Model Number: QEMU NVMe Ctrl 00:07:04.221 Firmware Version: 8.0.0 00:07:04.221 Recommended Arb Burst: 6 00:07:04.221 IEEE OUI Identifier: 00 54 52 00:07:04.221 Multi-path I/O 00:07:04.221 May have multiple subsystem ports: No 00:07:04.221 May have multiple controllers: Yes 00:07:04.221 Associated with SR-IOV VF: No 00:07:04.221 Max Data Transfer Size: 524288 00:07:04.221 Max Number of Namespaces: 256 00:07:04.221 Max Number of I/O Queues: 64 00:07:04.221 NVMe Specification Version (VS): 1.4 00:07:04.221 NVMe Specification Version (Identify): 1.4 00:07:04.221 Maximum Queue Entries: 2048 00:07:04.221 Contiguous Queues Required: Yes 00:07:04.221 Arbitration Mechanisms Supported 00:07:04.221 Weighted Round Robin: Not Supported 00:07:04.221 Vendor Specific: Not Supported 00:07:04.221 Reset Timeout: 7500 ms 00:07:04.221 Doorbell Stride: 4 bytes 00:07:04.221 NVM Subsystem Reset: Not Supported 00:07:04.221 Command Sets Supported 00:07:04.221 NVM Command Set: Supported 00:07:04.221 Boot Partition: Not Supported 00:07:04.221 Memory Page Size Minimum: 4096 bytes 00:07:04.221 Memory Page Size Maximum: 65536 bytes 00:07:04.221 Persistent Memory Region: Not Supported 00:07:04.221 Optional Asynchronous Events Supported 00:07:04.221 Namespace Attribute Notices: Supported 00:07:04.221 Firmware Activation Notices: Not Supported 00:07:04.221 ANA Change Notices: Not Supported 00:07:04.221 PLE Aggregate Log Change Notices: Not Supported 00:07:04.221 LBA Status Info Alert Notices: Not Supported 00:07:04.221 EGE Aggregate Log Change Notices: Not Supported 00:07:04.221 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.221 Zone Descriptor Change Notices: Not Supported 00:07:04.221 Discovery Log Change Notices: Not Supported 00:07:04.221 Controller Attributes 00:07:04.221 128-bit Host Identifier: Not Supported 00:07:04.221 Non-Operational Permissive Mode: Not Supported 00:07:04.221 NVM Sets: Not Supported 00:07:04.221 Read Recovery Levels: Not Supported 00:07:04.221 Endurance Groups: Supported 00:07:04.221 Predictable Latency Mode: Not Supported 00:07:04.221 Traffic Based Keep ALive: Not Supported 00:07:04.221 
Namespace Granularity: Not Supported 00:07:04.221 SQ Associations: Not Supported 00:07:04.221 UUID List: Not Supported 00:07:04.221 Multi-Domain Subsystem: Not Supported 00:07:04.221 Fixed Capacity Management: Not Supported 00:07:04.221 Variable Capacity Management: Not Supported 00:07:04.221 Delete Endurance Group: Not Supported 00:07:04.221 Delete NVM Set: Not Supported 00:07:04.221 Extended LBA Formats Supported: Supported 00:07:04.221 Flexible Data Placement Supported: Supported 00:07:04.221 00:07:04.221 Controller Memory Buffer Support 00:07:04.221 ================================ 00:07:04.221 Supported: No 00:07:04.221 00:07:04.221 Persistent Memory Region Support 00:07:04.221 ================================ 00:07:04.221 Supported: No 00:07:04.221 00:07:04.221 Admin Command Set Attributes 00:07:04.221 ============================ 00:07:04.221 Security Send/Receive: Not Supported 00:07:04.221 Format NVM: Supported 00:07:04.221 Firmware Activate/Download: Not Supported 00:07:04.221 Namespace Management: Supported 00:07:04.221 Device Self-Test: Not Supported 00:07:04.221 Directives: Supported 00:07:04.221 NVMe-MI: Not Supported 00:07:04.221 Virtualization Management: Not Supported 00:07:04.221 Doorbell Buffer Config: Supported 00:07:04.221 Get LBA Status Capability: Not Supported 00:07:04.221 Command & Feature Lockdown Capability: Not Supported 00:07:04.221 Abort Command Limit: 4 00:07:04.221 Async Event Request Limit: 4 00:07:04.221 Number of Firmware Slots: N/A 00:07:04.221 Firmware Slot 1 Read-Only: N/A 00:07:04.221 Firmware Activation Without Reset: N/A 00:07:04.221 Multiple Update Detection Support: N/A 00:07:04.221 Firmware Update Granularity: No Information Provided 00:07:04.221 Per-Namespace SMART Log: Yes 00:07:04.221 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.221 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:04.221 Command Effects Log Page: Supported 00:07:04.221 Get Log Page Extended Data: Supported 00:07:04.221 Telemetry Log Pages: Not Supported 00:07:04.221 Persistent Event Log Pages: Not Supported 00:07:04.221 Supported Log Pages Log Page: May Support 00:07:04.221 Commands Supported & Effects Log Page: Not Supported 00:07:04.221 Feature Identifiers & Effects Log Page:May Support 00:07:04.221 [2024-11-18 10:35:30.068396] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62748 terminated unexpected 00:07:04.221 [2024-11-18 10:35:30.069437] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62748 terminated unexpected 00:07:04.221 [2024-11-18 10:35:30.070025] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62748 terminated unexpected 00:07:04.221 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.221 Data Area 4 for Telemetry Log: Not Supported 00:07:04.221 Error Log Page Entries Supported: 1 00:07:04.221 Keep Alive: Not Supported 00:07:04.221 00:07:04.221 NVM Command Set Attributes 00:07:04.221 ========================== 00:07:04.221 Submission Queue Entry Size 00:07:04.221 Max: 64 00:07:04.221 Min: 64 00:07:04.221 Completion Queue Entry Size 00:07:04.221 Max: 16 00:07:04.221 Min: 16 00:07:04.221 Number of Namespaces: 256 00:07:04.221 Compare Command: Supported 00:07:04.221 Write Uncorrectable Command: Not Supported 00:07:04.221 Dataset Management Command: Supported 00:07:04.221 Write Zeroes Command: Supported 00:07:04.221 Set Features Save Field: Supported 00:07:04.221 Reservations: Not Supported 00:07:04.221 Timestamp: Supported
00:07:04.221 Copy: Supported 00:07:04.221 Volatile Write Cache: Present 00:07:04.221 Atomic Write Unit (Normal): 1 00:07:04.221 Atomic Write Unit (PFail): 1 00:07:04.221 Atomic Compare & Write Unit: 1 00:07:04.221 Fused Compare & Write: Not Supported 00:07:04.221 Scatter-Gather List 00:07:04.221 SGL Command Set: Supported 00:07:04.221 SGL Keyed: Not Supported 00:07:04.221 SGL Bit Bucket Descriptor: Not Supported 00:07:04.221 SGL Metadata Pointer: Not Supported 00:07:04.221 Oversized SGL: Not Supported 00:07:04.221 SGL Metadata Address: Not Supported 00:07:04.221 SGL Offset: Not Supported 00:07:04.221 Transport SGL Data Block: Not Supported 00:07:04.221 Replay Protected Memory Block: Not Supported 00:07:04.221 00:07:04.221 Firmware Slot Information 00:07:04.221 ========================= 00:07:04.221 Active slot: 1 00:07:04.221 Slot 1 Firmware Revision: 1.0 00:07:04.221 00:07:04.221 00:07:04.221 Commands Supported and Effects 00:07:04.221 ============================== 00:07:04.221 Admin Commands 00:07:04.221 -------------- 00:07:04.221 Delete I/O Submission Queue (00h): Supported 00:07:04.221 Create I/O Submission Queue (01h): Supported 00:07:04.221 Get Log Page (02h): Supported 00:07:04.221 Delete I/O Completion Queue (04h): Supported 00:07:04.221 Create I/O Completion Queue (05h): Supported 00:07:04.221 Identify (06h): Supported 00:07:04.221 Abort (08h): Supported 00:07:04.221 Set Features (09h): Supported 00:07:04.221 Get Features (0Ah): Supported 00:07:04.221 Asynchronous Event Request (0Ch): Supported 00:07:04.221 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.221 Directive Send (19h): Supported 00:07:04.221 Directive Receive (1Ah): Supported 00:07:04.221 Virtualization Management (1Ch): Supported 00:07:04.221 Doorbell Buffer Config (7Ch): Supported 00:07:04.221 Format NVM (80h): Supported LBA-Change 00:07:04.221 I/O Commands 00:07:04.221 ------------ 00:07:04.221 Flush (00h): Supported LBA-Change 00:07:04.221 Write (01h): Supported LBA-Change 00:07:04.221 Read (02h): Supported 00:07:04.221 Compare (05h): Supported 00:07:04.221 Write Zeroes (08h): Supported LBA-Change 00:07:04.221 Dataset Management (09h): Supported LBA-Change 00:07:04.221 Unknown (0Ch): Supported 00:07:04.221 Unknown (12h): Supported 00:07:04.221 Copy (19h): Supported LBA-Change 00:07:04.221 Unknown (1Dh): Supported LBA-Change 00:07:04.221 00:07:04.221 Error Log 00:07:04.221 ========= 00:07:04.221 00:07:04.221 Arbitration 00:07:04.221 =========== 00:07:04.221 Arbitration Burst: no limit 00:07:04.221 00:07:04.221 Power Management 00:07:04.221 ================ 00:07:04.221 Number of Power States: 1 00:07:04.222 Current Power State: Power State #0 00:07:04.222 Power State #0: 00:07:04.222 Max Power: 25.00 W 00:07:04.222 Non-Operational State: Operational 00:07:04.222 Entry Latency: 16 microseconds 00:07:04.222 Exit Latency: 4 microseconds 00:07:04.222 Relative Read Throughput: 0 00:07:04.222 Relative Read Latency: 0 00:07:04.222 Relative Write Throughput: 0 00:07:04.222 Relative Write Latency: 0 00:07:04.222 Idle Power: Not Reported 00:07:04.222 Active Power: Not Reported 00:07:04.222 Non-Operational Permissive Mode: Not Supported 00:07:04.222 00:07:04.222 Health Information 00:07:04.222 ================== 00:07:04.222 Critical Warnings: 00:07:04.222 Available Spare Space: OK 00:07:04.222 Temperature: OK 00:07:04.222 Device Reliability: OK 00:07:04.222 Read Only: No 00:07:04.222 Volatile Memory Backup: OK 00:07:04.222 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.222 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:07:04.222 Available Spare: 0% 00:07:04.222 Available Spare Threshold: 0% 00:07:04.222 Life Percentage Used: 0% 00:07:04.222 Data Units Read: 778 00:07:04.222 Data Units Written: 708 00:07:04.222 Host Read Commands: 35619 00:07:04.222 Host Write Commands: 35042 00:07:04.222 Controller Busy Time: 0 minutes 00:07:04.222 Power Cycles: 0 00:07:04.222 Power On Hours: 0 hours 00:07:04.222 Unsafe Shutdowns: 0 00:07:04.222 Unrecoverable Media Errors: 0 00:07:04.222 Lifetime Error Log Entries: 0 00:07:04.222 Warning Temperature Time: 0 minutes 00:07:04.222 Critical Temperature Time: 0 minutes 00:07:04.222 00:07:04.222 Number of Queues 00:07:04.222 ================ 00:07:04.222 Number of I/O Submission Queues: 64 00:07:04.222 Number of I/O Completion Queues: 64 00:07:04.222 00:07:04.222 ZNS Specific Controller Data 00:07:04.222 ============================ 00:07:04.222 Zone Append Size Limit: 0 00:07:04.222 00:07:04.222 00:07:04.222 Active Namespaces 00:07:04.222 ================= 00:07:04.222 Namespace ID:1 00:07:04.222 Error Recovery Timeout: Unlimited 00:07:04.222 Command Set Identifier: NVM (00h) 00:07:04.222 Deallocate: Supported 00:07:04.222 Deallocated/Unwritten Error: Supported 00:07:04.222 Deallocated Read Value: All 0x00 00:07:04.222 Deallocate in Write Zeroes: Not Supported 00:07:04.222 Deallocated Guard Field: 0xFFFF 00:07:04.222 Flush: Supported 00:07:04.222 Reservation: Not Supported 00:07:04.222 Namespace Sharing Capabilities: Multiple Controllers 00:07:04.222 Size (in LBAs): 262144 (1GiB) 00:07:04.222 Capacity (in LBAs): 262144 (1GiB) 00:07:04.222 Utilization (in LBAs): 262144 (1GiB) 00:07:04.222 Thin Provisioning: Not Supported 00:07:04.222 Per-NS Atomic Units: No 00:07:04.222 Maximum Single Source Range Length: 128 00:07:04.222 Maximum Copy Length: 128 00:07:04.222 Maximum Source Range Count: 128 00:07:04.222 NGUID/EUI64 Never Reused: No 00:07:04.222 Namespace Write Protected: No 00:07:04.222 Endurance group ID: 1 00:07:04.222 Number of LBA Formats: 8 00:07:04.222 Current LBA Format: LBA Format #04 00:07:04.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.222 00:07:04.222 Get Feature FDP: 00:07:04.222 ================ 00:07:04.222 Enabled: Yes 00:07:04.222 FDP configuration index: 0 00:07:04.222 00:07:04.222 FDP configurations log page 00:07:04.222 =========================== 00:07:04.222 Number of FDP configurations: 1 00:07:04.222 Version: 0 00:07:04.222 Size: 112 00:07:04.222 FDP Configuration Descriptor: 0 00:07:04.222 Descriptor Size: 96 00:07:04.222 Reclaim Group Identifier format: 2 00:07:04.222 FDP Volatile Write Cache: Not Present 00:07:04.222 FDP Configuration: Valid 00:07:04.222 Vendor Specific Size: 0 00:07:04.222 Number of Reclaim Groups: 2 00:07:04.222 Number of Reclaim Unit Handles: 8 00:07:04.222 Max Placement Identifiers: 128 00:07:04.222 Number of Namespaces Supported: 256 00:07:04.222 Reclaim unit Nominal Size: 6000000 bytes 00:07:04.222 Estimated Reclaim Unit Time Limit: Not Reported 00:07:04.222 RUH Desc #000: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #001: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #002: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #003: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #004: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #005: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #006: RUH Type: Initially Isolated 00:07:04.222 RUH Desc #007: RUH Type: Initially Isolated
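Controller 12343 (subsystem nqn.2019-08.org.qemu:fdp-subsys3) is the FDP-enabled device in this rig; its configuration descriptor just above reports 2 reclaim groups, 8 reclaim unit handles, and a nominal reclaim unit of 6000000 bytes. To pull only the FDP sections back out for this one controller, something like the following should work; the -r transport-ID selector is an assumption here (this run only shows the -i 0 form), while the address 0000:00:13.0 comes from this log:

  # Identify a single controller and keep just the FDP sections of the report.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:13.0' \
    | sed -n '/Get Feature FDP/,/Number of FDP events/p'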
00:07:04.222 00:07:04.222 FDP reclaim unit handle usage log page 00:07:04.222 ====================================== 00:07:04.222 Number of Reclaim Unit Handles: 8 00:07:04.222 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:04.222 RUH Usage Desc #001: RUH Attributes: Unused 00:07:04.222 RUH Usage Desc #002: RUH Attributes: Unused 00:07:04.222 RUH Usage Desc #003: RUH Attributes: Unused 00:07:04.222 RUH Usage Desc #004: RUH Attributes: Unused 00:07:04.222 RUH Usage Desc #005: RUH Attributes: Unused 00:07:04.222 RUH Usage Desc #006: RUH Attributes: Unused 00:07:04.222 RUH Usage Desc #007: RUH Attributes: Unused 00:07:04.222 00:07:04.222 FDP statistics log page 00:07:04.222 ======================= 00:07:04.222 Host bytes with metadata written: 456368128 00:07:04.222 Media bytes with metadata written: 456433664 00:07:04.222 Media bytes erased: 0 00:07:04.222 [2024-11-18 10:35:30.071904] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62748 terminated unexpected 00:07:04.222 00:07:04.222 FDP events log page 00:07:04.222 =================== 00:07:04.222 Number of FDP events: 0 00:07:04.222 00:07:04.222 NVM Specific Namespace Data 00:07:04.222 =========================== 00:07:04.222 Logical Block Storage Tag Mask: 0 00:07:04.222 Protection Information Capabilities: 00:07:04.222 16b Guard Protection Information Storage Tag Support: No 00:07:04.222 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.222 Storage Tag Check Read Support: No 00:07:04.222 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.222 ===================================================== 00:07:04.222 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:04.222 ===================================================== 00:07:04.222 Controller Capabilities/Features 00:07:04.222 ================================ 00:07:04.222 Vendor ID: 1b36 00:07:04.222 Subsystem Vendor ID: 1af4 00:07:04.222 Serial Number: 12342 00:07:04.222 Model Number: QEMU NVMe Ctrl 00:07:04.222 Firmware Version: 8.0.0 00:07:04.222 Recommended Arb Burst: 6 00:07:04.222 IEEE OUI Identifier: 00 54 52 00:07:04.222 Multi-path I/O 00:07:04.222 May have multiple subsystem ports: No 00:07:04.222 May have multiple controllers: No 00:07:04.222 Associated with SR-IOV VF: No 00:07:04.222 Max Data Transfer Size: 524288 00:07:04.222 Max Number of Namespaces: 256 00:07:04.222 Max Number of I/O
Queues: 64 00:07:04.223 NVMe Specification Version (VS): 1.4 00:07:04.223 NVMe Specification Version (Identify): 1.4 00:07:04.223 Maximum Queue Entries: 2048 00:07:04.223 Contiguous Queues Required: Yes 00:07:04.223 Arbitration Mechanisms Supported 00:07:04.223 Weighted Round Robin: Not Supported 00:07:04.223 Vendor Specific: Not Supported 00:07:04.223 Reset Timeout: 7500 ms 00:07:04.223 Doorbell Stride: 4 bytes 00:07:04.223 NVM Subsystem Reset: Not Supported 00:07:04.223 Command Sets Supported 00:07:04.223 NVM Command Set: Supported 00:07:04.223 Boot Partition: Not Supported 00:07:04.223 Memory Page Size Minimum: 4096 bytes 00:07:04.223 Memory Page Size Maximum: 65536 bytes 00:07:04.223 Persistent Memory Region: Not Supported 00:07:04.223 Optional Asynchronous Events Supported 00:07:04.223 Namespace Attribute Notices: Supported 00:07:04.223 Firmware Activation Notices: Not Supported 00:07:04.223 ANA Change Notices: Not Supported 00:07:04.223 PLE Aggregate Log Change Notices: Not Supported 00:07:04.223 LBA Status Info Alert Notices: Not Supported 00:07:04.223 EGE Aggregate Log Change Notices: Not Supported 00:07:04.223 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.223 Zone Descriptor Change Notices: Not Supported 00:07:04.223 Discovery Log Change Notices: Not Supported 00:07:04.223 Controller Attributes 00:07:04.223 128-bit Host Identifier: Not Supported 00:07:04.223 Non-Operational Permissive Mode: Not Supported 00:07:04.223 NVM Sets: Not Supported 00:07:04.223 Read Recovery Levels: Not Supported 00:07:04.223 Endurance Groups: Not Supported 00:07:04.223 Predictable Latency Mode: Not Supported 00:07:04.223 Traffic Based Keep ALive: Not Supported 00:07:04.223 Namespace Granularity: Not Supported 00:07:04.223 SQ Associations: Not Supported 00:07:04.223 UUID List: Not Supported 00:07:04.223 Multi-Domain Subsystem: Not Supported 00:07:04.223 Fixed Capacity Management: Not Supported 00:07:04.223 Variable Capacity Management: Not Supported 00:07:04.223 Delete Endurance Group: Not Supported 00:07:04.223 Delete NVM Set: Not Supported 00:07:04.223 Extended LBA Formats Supported: Supported 00:07:04.223 Flexible Data Placement Supported: Not Supported 00:07:04.223 00:07:04.223 Controller Memory Buffer Support 00:07:04.223 ================================ 00:07:04.223 Supported: No 00:07:04.223 00:07:04.223 Persistent Memory Region Support 00:07:04.223 ================================ 00:07:04.223 Supported: No 00:07:04.223 00:07:04.223 Admin Command Set Attributes 00:07:04.223 ============================ 00:07:04.223 Security Send/Receive: Not Supported 00:07:04.223 Format NVM: Supported 00:07:04.223 Firmware Activate/Download: Not Supported 00:07:04.223 Namespace Management: Supported 00:07:04.223 Device Self-Test: Not Supported 00:07:04.223 Directives: Supported 00:07:04.223 NVMe-MI: Not Supported 00:07:04.223 Virtualization Management: Not Supported 00:07:04.223 Doorbell Buffer Config: Supported 00:07:04.223 Get LBA Status Capability: Not Supported 00:07:04.223 Command & Feature Lockdown Capability: Not Supported 00:07:04.223 Abort Command Limit: 4 00:07:04.223 Async Event Request Limit: 4 00:07:04.223 Number of Firmware Slots: N/A 00:07:04.223 Firmware Slot 1 Read-Only: N/A 00:07:04.223 Firmware Activation Without Reset: N/A 00:07:04.223 Multiple Update Detection Support: N/A 00:07:04.223 Firmware Update Granularity: No Information Provided 00:07:04.223 Per-Namespace SMART Log: Yes 00:07:04.223 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.223 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:07:04.223 Command Effects Log Page: Supported 00:07:04.223 Get Log Page Extended Data: Supported 00:07:04.223 Telemetry Log Pages: Not Supported 00:07:04.223 Persistent Event Log Pages: Not Supported 00:07:04.223 Supported Log Pages Log Page: May Support 00:07:04.223 Commands Supported & Effects Log Page: Not Supported 00:07:04.223 Feature Identifiers & Effects Log Page:May Support 00:07:04.223 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.223 Data Area 4 for Telemetry Log: Not Supported 00:07:04.223 Error Log Page Entries Supported: 1 00:07:04.223 Keep Alive: Not Supported 00:07:04.223 00:07:04.223 NVM Command Set Attributes 00:07:04.223 ========================== 00:07:04.223 Submission Queue Entry Size 00:07:04.223 Max: 64 00:07:04.223 Min: 64 00:07:04.223 Completion Queue Entry Size 00:07:04.223 Max: 16 00:07:04.223 Min: 16 00:07:04.223 Number of Namespaces: 256 00:07:04.223 Compare Command: Supported 00:07:04.223 Write Uncorrectable Command: Not Supported 00:07:04.223 Dataset Management Command: Supported 00:07:04.223 Write Zeroes Command: Supported 00:07:04.223 Set Features Save Field: Supported 00:07:04.223 Reservations: Not Supported 00:07:04.223 Timestamp: Supported 00:07:04.223 Copy: Supported 00:07:04.223 Volatile Write Cache: Present 00:07:04.223 Atomic Write Unit (Normal): 1 00:07:04.223 Atomic Write Unit (PFail): 1 00:07:04.223 Atomic Compare & Write Unit: 1 00:07:04.223 Fused Compare & Write: Not Supported 00:07:04.223 Scatter-Gather List 00:07:04.223 SGL Command Set: Supported 00:07:04.223 SGL Keyed: Not Supported 00:07:04.223 SGL Bit Bucket Descriptor: Not Supported 00:07:04.223 SGL Metadata Pointer: Not Supported 00:07:04.223 Oversized SGL: Not Supported 00:07:04.223 SGL Metadata Address: Not Supported 00:07:04.223 SGL Offset: Not Supported 00:07:04.223 Transport SGL Data Block: Not Supported 00:07:04.223 Replay Protected Memory Block: Not Supported 00:07:04.223 00:07:04.223 Firmware Slot Information 00:07:04.223 ========================= 00:07:04.223 Active slot: 1 00:07:04.223 Slot 1 Firmware Revision: 1.0 00:07:04.223 00:07:04.223 00:07:04.223 Commands Supported and Effects 00:07:04.223 ============================== 00:07:04.223 Admin Commands 00:07:04.223 -------------- 00:07:04.223 Delete I/O Submission Queue (00h): Supported 00:07:04.223 Create I/O Submission Queue (01h): Supported 00:07:04.223 Get Log Page (02h): Supported 00:07:04.223 Delete I/O Completion Queue (04h): Supported 00:07:04.223 Create I/O Completion Queue (05h): Supported 00:07:04.223 Identify (06h): Supported 00:07:04.223 Abort (08h): Supported 00:07:04.223 Set Features (09h): Supported 00:07:04.223 Get Features (0Ah): Supported 00:07:04.223 Asynchronous Event Request (0Ch): Supported 00:07:04.223 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.223 Directive Send (19h): Supported 00:07:04.223 Directive Receive (1Ah): Supported 00:07:04.223 Virtualization Management (1Ch): Supported 00:07:04.223 Doorbell Buffer Config (7Ch): Supported 00:07:04.223 Format NVM (80h): Supported LBA-Change 00:07:04.223 I/O Commands 00:07:04.223 ------------ 00:07:04.223 Flush (00h): Supported LBA-Change 00:07:04.223 Write (01h): Supported LBA-Change 00:07:04.223 Read (02h): Supported 00:07:04.223 Compare (05h): Supported 00:07:04.223 Write Zeroes (08h): Supported LBA-Change 00:07:04.223 Dataset Management (09h): Supported LBA-Change 00:07:04.223 Unknown (0Ch): Supported 00:07:04.223 Unknown (12h): Supported 00:07:04.223 Copy (19h): Supported LBA-Change 
00:07:04.223 Unknown (1Dh): Supported LBA-Change 00:07:04.223 00:07:04.223 Error Log 00:07:04.223 ========= 00:07:04.223 00:07:04.223 Arbitration 00:07:04.223 =========== 00:07:04.223 Arbitration Burst: no limit 00:07:04.223 00:07:04.223 Power Management 00:07:04.223 ================ 00:07:04.223 Number of Power States: 1 00:07:04.223 Current Power State: Power State #0 00:07:04.223 Power State #0: 00:07:04.223 Max Power: 25.00 W 00:07:04.223 Non-Operational State: Operational 00:07:04.223 Entry Latency: 16 microseconds 00:07:04.223 Exit Latency: 4 microseconds 00:07:04.223 Relative Read Throughput: 0 00:07:04.223 Relative Read Latency: 0 00:07:04.223 Relative Write Throughput: 0 00:07:04.223 Relative Write Latency: 0 00:07:04.223 Idle Power: Not Reported 00:07:04.223 Active Power: Not Reported 00:07:04.223 Non-Operational Permissive Mode: Not Supported 00:07:04.223 00:07:04.223 Health Information 00:07:04.223 ================== 00:07:04.223 Critical Warnings: 00:07:04.223 Available Spare Space: OK 00:07:04.223 Temperature: OK 00:07:04.223 Device Reliability: OK 00:07:04.223 Read Only: No 00:07:04.224 Volatile Memory Backup: OK 00:07:04.224 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.224 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.224 Available Spare: 0% 00:07:04.224 Available Spare Threshold: 0% 00:07:04.224 Life Percentage Used: 0% 00:07:04.224 Data Units Read: 2160 00:07:04.224 Data Units Written: 1947 00:07:04.224 Host Read Commands: 105072 00:07:04.224 Host Write Commands: 103342 00:07:04.224 Controller Busy Time: 0 minutes 00:07:04.224 Power Cycles: 0 00:07:04.224 Power On Hours: 0 hours 00:07:04.224 Unsafe Shutdowns: 0 00:07:04.224 Unrecoverable Media Errors: 0 00:07:04.224 Lifetime Error Log Entries: 0 00:07:04.224 Warning Temperature Time: 0 minutes 00:07:04.224 Critical Temperature Time: 0 minutes 00:07:04.224 00:07:04.224 Number of Queues 00:07:04.224 ================ 00:07:04.224 Number of I/O Submission Queues: 64 00:07:04.224 Number of I/O Completion Queues: 64 00:07:04.224 00:07:04.224 ZNS Specific Controller Data 00:07:04.224 ============================ 00:07:04.224 Zone Append Size Limit: 0 00:07:04.224 00:07:04.224 00:07:04.224 Active Namespaces 00:07:04.224 ================= 00:07:04.224 Namespace ID:1 00:07:04.224 Error Recovery Timeout: Unlimited 00:07:04.224 Command Set Identifier: NVM (00h) 00:07:04.224 Deallocate: Supported 00:07:04.224 Deallocated/Unwritten Error: Supported 00:07:04.224 Deallocated Read Value: All 0x00 00:07:04.224 Deallocate in Write Zeroes: Not Supported 00:07:04.224 Deallocated Guard Field: 0xFFFF 00:07:04.224 Flush: Supported 00:07:04.224 Reservation: Not Supported 00:07:04.224 Namespace Sharing Capabilities: Private 00:07:04.224 Size (in LBAs): 1048576 (4GiB) 00:07:04.224 Capacity (in LBAs): 1048576 (4GiB) 00:07:04.224 Utilization (in LBAs): 1048576 (4GiB) 00:07:04.224 Thin Provisioning: Not Supported 00:07:04.224 Per-NS Atomic Units: No 00:07:04.224 Maximum Single Source Range Length: 128 00:07:04.224 Maximum Copy Length: 128 00:07:04.224 Maximum Source Range Count: 128 00:07:04.224 NGUID/EUI64 Never Reused: No 00:07:04.224 Namespace Write Protected: No 00:07:04.224 Number of LBA Formats: 8 00:07:04.224 Current LBA Format: LBA Format #04 00:07:04.224 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.224 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.224 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.224 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.224 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:04.224 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.224 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.224 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.224 00:07:04.224 NVM Specific Namespace Data 00:07:04.224 =========================== 00:07:04.224 Logical Block Storage Tag Mask: 0 00:07:04.224 Protection Information Capabilities: 00:07:04.224 16b Guard Protection Information Storage Tag Support: No 00:07:04.224 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.224 Storage Tag Check Read Support: No 00:07:04.224 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Namespace ID:2 00:07:04.224 Error Recovery Timeout: Unlimited 00:07:04.224 Command Set Identifier: NVM (00h) 00:07:04.224 Deallocate: Supported 00:07:04.224 Deallocated/Unwritten Error: Supported 00:07:04.224 Deallocated Read Value: All 0x00 00:07:04.224 Deallocate in Write Zeroes: Not Supported 00:07:04.224 Deallocated Guard Field: 0xFFFF 00:07:04.224 Flush: Supported 00:07:04.224 Reservation: Not Supported 00:07:04.224 Namespace Sharing Capabilities: Private 00:07:04.224 Size (in LBAs): 1048576 (4GiB) 00:07:04.224 Capacity (in LBAs): 1048576 (4GiB) 00:07:04.224 Utilization (in LBAs): 1048576 (4GiB) 00:07:04.224 Thin Provisioning: Not Supported 00:07:04.224 Per-NS Atomic Units: No 00:07:04.224 Maximum Single Source Range Length: 128 00:07:04.224 Maximum Copy Length: 128 00:07:04.224 Maximum Source Range Count: 128 00:07:04.224 NGUID/EUI64 Never Reused: No 00:07:04.224 Namespace Write Protected: No 00:07:04.224 Number of LBA Formats: 8 00:07:04.224 Current LBA Format: LBA Format #04 00:07:04.224 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.224 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.224 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.224 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.224 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.224 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.224 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.224 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.224 00:07:04.224 NVM Specific Namespace Data 00:07:04.224 =========================== 00:07:04.224 Logical Block Storage Tag Mask: 0 00:07:04.224 Protection Information Capabilities: 00:07:04.224 16b Guard Protection Information Storage Tag Support: No 00:07:04.224 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.224 Storage Tag Check Read Support: No 00:07:04.224 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:04.224 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Namespace ID:3 00:07:04.224 Error Recovery Timeout: Unlimited 00:07:04.224 Command Set Identifier: NVM (00h) 00:07:04.224 Deallocate: Supported 00:07:04.224 Deallocated/Unwritten Error: Supported 00:07:04.224 Deallocated Read Value: All 0x00 00:07:04.224 Deallocate in Write Zeroes: Not Supported 00:07:04.224 Deallocated Guard Field: 0xFFFF 00:07:04.224 Flush: Supported 00:07:04.224 Reservation: Not Supported 00:07:04.224 Namespace Sharing Capabilities: Private 00:07:04.224 Size (in LBAs): 1048576 (4GiB) 00:07:04.224 Capacity (in LBAs): 1048576 (4GiB) 00:07:04.224 Utilization (in LBAs): 1048576 (4GiB) 00:07:04.224 Thin Provisioning: Not Supported 00:07:04.224 Per-NS Atomic Units: No 00:07:04.224 Maximum Single Source Range Length: 128 00:07:04.224 Maximum Copy Length: 128 00:07:04.224 Maximum Source Range Count: 128 00:07:04.224 NGUID/EUI64 Never Reused: No 00:07:04.224 Namespace Write Protected: No 00:07:04.224 Number of LBA Formats: 8 00:07:04.224 Current LBA Format: LBA Format #04 00:07:04.224 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.224 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.224 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.224 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.224 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.224 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.224 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.224 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.224 00:07:04.224 NVM Specific Namespace Data 00:07:04.224 =========================== 00:07:04.224 Logical Block Storage Tag Mask: 0 00:07:04.224 Protection Information Capabilities: 00:07:04.224 16b Guard Protection Information Storage Tag Support: No 00:07:04.224 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.224 Storage Tag Check Read Support: No 00:07:04.224 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.224 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.483 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:04.483 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:04.483 ===================================================== 00:07:04.483 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:04.483 ===================================================== 00:07:04.483 Controller Capabilities/Features 00:07:04.483 ================================ 00:07:04.483 Vendor ID: 1b36 00:07:04.483 Subsystem Vendor ID: 1af4 00:07:04.483 Serial Number: 12340 00:07:04.483 Model Number: QEMU NVMe Ctrl 00:07:04.483 Firmware Version: 8.0.0 00:07:04.483 Recommended Arb Burst: 6 00:07:04.483 IEEE OUI Identifier: 00 54 52 00:07:04.483 Multi-path I/O 00:07:04.483 May have multiple subsystem ports: No 00:07:04.483 May have multiple controllers: No 00:07:04.483 Associated with SR-IOV VF: No 00:07:04.483 Max Data Transfer Size: 524288 00:07:04.483 Max Number of Namespaces: 256 00:07:04.483 Max Number of I/O Queues: 64 00:07:04.483 NVMe Specification Version (VS): 1.4 00:07:04.483 NVMe Specification Version (Identify): 1.4 00:07:04.483 Maximum Queue Entries: 2048 00:07:04.483 Contiguous Queues Required: Yes 00:07:04.483 Arbitration Mechanisms Supported 00:07:04.483 Weighted Round Robin: Not Supported 00:07:04.484 Vendor Specific: Not Supported 00:07:04.484 Reset Timeout: 7500 ms 00:07:04.484 Doorbell Stride: 4 bytes 00:07:04.484 NVM Subsystem Reset: Not Supported 00:07:04.484 Command Sets Supported 00:07:04.484 NVM Command Set: Supported 00:07:04.484 Boot Partition: Not Supported 00:07:04.484 Memory Page Size Minimum: 4096 bytes 00:07:04.484 Memory Page Size Maximum: 65536 bytes 00:07:04.484 Persistent Memory Region: Not Supported 00:07:04.484 Optional Asynchronous Events Supported 00:07:04.484 Namespace Attribute Notices: Supported 00:07:04.484 Firmware Activation Notices: Not Supported 00:07:04.484 ANA Change Notices: Not Supported 00:07:04.484 PLE Aggregate Log Change Notices: Not Supported 00:07:04.484 LBA Status Info Alert Notices: Not Supported 00:07:04.484 EGE Aggregate Log Change Notices: Not Supported 00:07:04.484 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.484 Zone Descriptor Change Notices: Not Supported 00:07:04.484 Discovery Log Change Notices: Not Supported 00:07:04.484 Controller Attributes 00:07:04.484 128-bit Host Identifier: Not Supported 00:07:04.484 Non-Operational Permissive Mode: Not Supported 00:07:04.484 NVM Sets: Not Supported 00:07:04.484 Read Recovery Levels: Not Supported 00:07:04.484 Endurance Groups: Not Supported 00:07:04.484 Predictable Latency Mode: Not Supported 00:07:04.484 Traffic Based Keep ALive: Not Supported 00:07:04.484 Namespace Granularity: Not Supported 00:07:04.484 SQ Associations: Not Supported 00:07:04.484 UUID List: Not Supported 00:07:04.484 Multi-Domain Subsystem: Not Supported 00:07:04.484 Fixed Capacity Management: Not Supported 00:07:04.484 Variable Capacity Management: Not Supported 00:07:04.484 Delete Endurance Group: Not Supported 00:07:04.484 Delete NVM Set: Not Supported 00:07:04.484 Extended LBA Formats Supported: Supported 00:07:04.484 Flexible Data Placement Supported: Not Supported 00:07:04.484 00:07:04.484 Controller Memory Buffer Support 00:07:04.484 ================================ 00:07:04.484 Supported: No 00:07:04.484 00:07:04.484 Persistent Memory Region Support 00:07:04.484 ================================ 00:07:04.484 Supported: No 00:07:04.484 00:07:04.484 Admin Command Set Attributes 00:07:04.484 ============================ 00:07:04.484 Security Send/Receive: Not Supported 00:07:04.484 
Format NVM: Supported 00:07:04.484 Firmware Activate/Download: Not Supported 00:07:04.484 Namespace Management: Supported 00:07:04.484 Device Self-Test: Not Supported 00:07:04.484 Directives: Supported 00:07:04.484 NVMe-MI: Not Supported 00:07:04.484 Virtualization Management: Not Supported 00:07:04.484 Doorbell Buffer Config: Supported 00:07:04.484 Get LBA Status Capability: Not Supported 00:07:04.484 Command & Feature Lockdown Capability: Not Supported 00:07:04.484 Abort Command Limit: 4 00:07:04.484 Async Event Request Limit: 4 00:07:04.484 Number of Firmware Slots: N/A 00:07:04.484 Firmware Slot 1 Read-Only: N/A 00:07:04.484 Firmware Activation Without Reset: N/A 00:07:04.484 Multiple Update Detection Support: N/A 00:07:04.484 Firmware Update Granularity: No Information Provided 00:07:04.484 Per-Namespace SMART Log: Yes 00:07:04.484 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.484 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:04.484 Command Effects Log Page: Supported 00:07:04.484 Get Log Page Extended Data: Supported 00:07:04.484 Telemetry Log Pages: Not Supported 00:07:04.484 Persistent Event Log Pages: Not Supported 00:07:04.484 Supported Log Pages Log Page: May Support 00:07:04.484 Commands Supported & Effects Log Page: Not Supported 00:07:04.484 Feature Identifiers & Effects Log Page:May Support 00:07:04.484 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.484 Data Area 4 for Telemetry Log: Not Supported 00:07:04.484 Error Log Page Entries Supported: 1 00:07:04.484 Keep Alive: Not Supported 00:07:04.484 00:07:04.484 NVM Command Set Attributes 00:07:04.484 ========================== 00:07:04.484 Submission Queue Entry Size 00:07:04.484 Max: 64 00:07:04.484 Min: 64 00:07:04.484 Completion Queue Entry Size 00:07:04.484 Max: 16 00:07:04.484 Min: 16 00:07:04.484 Number of Namespaces: 256 00:07:04.484 Compare Command: Supported 00:07:04.484 Write Uncorrectable Command: Not Supported 00:07:04.484 Dataset Management Command: Supported 00:07:04.484 Write Zeroes Command: Supported 00:07:04.484 Set Features Save Field: Supported 00:07:04.484 Reservations: Not Supported 00:07:04.484 Timestamp: Supported 00:07:04.484 Copy: Supported 00:07:04.484 Volatile Write Cache: Present 00:07:04.484 Atomic Write Unit (Normal): 1 00:07:04.484 Atomic Write Unit (PFail): 1 00:07:04.484 Atomic Compare & Write Unit: 1 00:07:04.484 Fused Compare & Write: Not Supported 00:07:04.484 Scatter-Gather List 00:07:04.484 SGL Command Set: Supported 00:07:04.484 SGL Keyed: Not Supported 00:07:04.484 SGL Bit Bucket Descriptor: Not Supported 00:07:04.484 SGL Metadata Pointer: Not Supported 00:07:04.484 Oversized SGL: Not Supported 00:07:04.484 SGL Metadata Address: Not Supported 00:07:04.484 SGL Offset: Not Supported 00:07:04.484 Transport SGL Data Block: Not Supported 00:07:04.484 Replay Protected Memory Block: Not Supported 00:07:04.484 00:07:04.484 Firmware Slot Information 00:07:04.484 ========================= 00:07:04.484 Active slot: 1 00:07:04.484 Slot 1 Firmware Revision: 1.0 00:07:04.484 00:07:04.484 00:07:04.484 Commands Supported and Effects 00:07:04.484 ============================== 00:07:04.484 Admin Commands 00:07:04.484 -------------- 00:07:04.484 Delete I/O Submission Queue (00h): Supported 00:07:04.484 Create I/O Submission Queue (01h): Supported 00:07:04.484 Get Log Page (02h): Supported 00:07:04.484 Delete I/O Completion Queue (04h): Supported 00:07:04.484 Create I/O Completion Queue (05h): Supported 00:07:04.484 Identify (06h): Supported 00:07:04.484 Abort (08h): Supported 
00:07:04.484 Set Features (09h): Supported 00:07:04.484 Get Features (0Ah): Supported 00:07:04.484 Asynchronous Event Request (0Ch): Supported 00:07:04.484 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.484 Directive Send (19h): Supported 00:07:04.484 Directive Receive (1Ah): Supported 00:07:04.484 Virtualization Management (1Ch): Supported 00:07:04.484 Doorbell Buffer Config (7Ch): Supported 00:07:04.484 Format NVM (80h): Supported LBA-Change 00:07:04.484 I/O Commands 00:07:04.484 ------------ 00:07:04.484 Flush (00h): Supported LBA-Change 00:07:04.484 Write (01h): Supported LBA-Change 00:07:04.484 Read (02h): Supported 00:07:04.484 Compare (05h): Supported 00:07:04.484 Write Zeroes (08h): Supported LBA-Change 00:07:04.484 Dataset Management (09h): Supported LBA-Change 00:07:04.484 Unknown (0Ch): Supported 00:07:04.484 Unknown (12h): Supported 00:07:04.484 Copy (19h): Supported LBA-Change 00:07:04.484 Unknown (1Dh): Supported LBA-Change 00:07:04.484 00:07:04.484 Error Log 00:07:04.484 ========= 00:07:04.484 00:07:04.484 Arbitration 00:07:04.484 =========== 00:07:04.484 Arbitration Burst: no limit 00:07:04.484 00:07:04.484 Power Management 00:07:04.484 ================ 00:07:04.485 Number of Power States: 1 00:07:04.485 Current Power State: Power State #0 00:07:04.485 Power State #0: 00:07:04.485 Max Power: 25.00 W 00:07:04.485 Non-Operational State: Operational 00:07:04.485 Entry Latency: 16 microseconds 00:07:04.485 Exit Latency: 4 microseconds 00:07:04.485 Relative Read Throughput: 0 00:07:04.485 Relative Read Latency: 0 00:07:04.485 Relative Write Throughput: 0 00:07:04.485 Relative Write Latency: 0 00:07:04.485 Idle Power: Not Reported 00:07:04.485 Active Power: Not Reported 00:07:04.485 Non-Operational Permissive Mode: Not Supported 00:07:04.485 00:07:04.485 Health Information 00:07:04.485 ================== 00:07:04.485 Critical Warnings: 00:07:04.485 Available Spare Space: OK 00:07:04.485 Temperature: OK 00:07:04.485 Device Reliability: OK 00:07:04.485 Read Only: No 00:07:04.485 Volatile Memory Backup: OK 00:07:04.485 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.485 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.485 Available Spare: 0% 00:07:04.485 Available Spare Threshold: 0% 00:07:04.485 Life Percentage Used: 0% 00:07:04.485 Data Units Read: 656 00:07:04.485 Data Units Written: 584 00:07:04.485 Host Read Commands: 34306 00:07:04.485 Host Write Commands: 34092 00:07:04.485 Controller Busy Time: 0 minutes 00:07:04.485 Power Cycles: 0 00:07:04.485 Power On Hours: 0 hours 00:07:04.485 Unsafe Shutdowns: 0 00:07:04.485 Unrecoverable Media Errors: 0 00:07:04.485 Lifetime Error Log Entries: 0 00:07:04.485 Warning Temperature Time: 0 minutes 00:07:04.485 Critical Temperature Time: 0 minutes 00:07:04.485 00:07:04.485 Number of Queues 00:07:04.485 ================ 00:07:04.485 Number of I/O Submission Queues: 64 00:07:04.485 Number of I/O Completion Queues: 64 00:07:04.485 00:07:04.485 ZNS Specific Controller Data 00:07:04.485 ============================ 00:07:04.485 Zone Append Size Limit: 0 00:07:04.485 00:07:04.485 00:07:04.485 Active Namespaces 00:07:04.485 ================= 00:07:04.485 Namespace ID:1 00:07:04.485 Error Recovery Timeout: Unlimited 00:07:04.485 Command Set Identifier: NVM (00h) 00:07:04.485 Deallocate: Supported 00:07:04.485 Deallocated/Unwritten Error: Supported 00:07:04.485 Deallocated Read Value: All 0x00 00:07:04.485 Deallocate in Write Zeroes: Not Supported 00:07:04.485 Deallocated Guard Field: 0xFFFF 00:07:04.485 Flush: 
Supported 00:07:04.485 Reservation: Not Supported 00:07:04.485 Metadata Transferred as: Separate Metadata Buffer 00:07:04.485 Namespace Sharing Capabilities: Private 00:07:04.485 Size (in LBAs): 1548666 (5GiB) 00:07:04.485 Capacity (in LBAs): 1548666 (5GiB) 00:07:04.485 Utilization (in LBAs): 1548666 (5GiB) 00:07:04.485 Thin Provisioning: Not Supported 00:07:04.485 Per-NS Atomic Units: No 00:07:04.485 Maximum Single Source Range Length: 128 00:07:04.485 Maximum Copy Length: 128 00:07:04.485 Maximum Source Range Count: 128 00:07:04.485 NGUID/EUI64 Never Reused: No 00:07:04.485 Namespace Write Protected: No 00:07:04.485 Number of LBA Formats: 8 00:07:04.485 Current LBA Format: LBA Format #07 00:07:04.485 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.485 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.485 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.485 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.485 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:04.485 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.485 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.485 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.485 00:07:04.485 NVM Specific Namespace Data 00:07:04.485 =========================== 00:07:04.485 Logical Block Storage Tag Mask: 0 00:07:04.485 Protection Information Capabilities: 00:07:04.485 16b Guard Protection Information Storage Tag Support: No 00:07:04.485 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.485 Storage Tag Check Read Support: No 00:07:04.485 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.485 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:04.485 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:04.744 ===================================================== 00:07:04.744 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:04.744 ===================================================== 00:07:04.744 Controller Capabilities/Features 00:07:04.744 ================================ 00:07:04.744 Vendor ID: 1b36 00:07:04.744 Subsystem Vendor ID: 1af4 00:07:04.744 Serial Number: 12341 00:07:04.744 Model Number: QEMU NVMe Ctrl 00:07:04.744 Firmware Version: 8.0.0 00:07:04.744 Recommended Arb Burst: 6 00:07:04.744 IEEE OUI Identifier: 00 54 52 00:07:04.744 Multi-path I/O 00:07:04.744 May have multiple subsystem ports: No 00:07:04.744 May have multiple controllers: No 00:07:04.744 Associated with SR-IOV VF: No 00:07:04.744 Max Data Transfer Size: 524288 00:07:04.744 Max Number of Namespaces: 256 00:07:04.744 Max Number of I/O Queues: 64 00:07:04.744 NVMe 
Specification Version (VS): 1.4 00:07:04.744 NVMe Specification Version (Identify): 1.4 00:07:04.744 Maximum Queue Entries: 2048 00:07:04.744 Contiguous Queues Required: Yes 00:07:04.744 Arbitration Mechanisms Supported 00:07:04.744 Weighted Round Robin: Not Supported 00:07:04.744 Vendor Specific: Not Supported 00:07:04.744 Reset Timeout: 7500 ms 00:07:04.744 Doorbell Stride: 4 bytes 00:07:04.744 NVM Subsystem Reset: Not Supported 00:07:04.744 Command Sets Supported 00:07:04.744 NVM Command Set: Supported 00:07:04.744 Boot Partition: Not Supported 00:07:04.744 Memory Page Size Minimum: 4096 bytes 00:07:04.744 Memory Page Size Maximum: 65536 bytes 00:07:04.744 Persistent Memory Region: Not Supported 00:07:04.744 Optional Asynchronous Events Supported 00:07:04.744 Namespace Attribute Notices: Supported 00:07:04.744 Firmware Activation Notices: Not Supported 00:07:04.744 ANA Change Notices: Not Supported 00:07:04.744 PLE Aggregate Log Change Notices: Not Supported 00:07:04.744 LBA Status Info Alert Notices: Not Supported 00:07:04.744 EGE Aggregate Log Change Notices: Not Supported 00:07:04.744 Normal NVM Subsystem Shutdown event: Not Supported 00:07:04.744 Zone Descriptor Change Notices: Not Supported 00:07:04.744 Discovery Log Change Notices: Not Supported 00:07:04.744 Controller Attributes 00:07:04.744 128-bit Host Identifier: Not Supported 00:07:04.744 Non-Operational Permissive Mode: Not Supported 00:07:04.744 NVM Sets: Not Supported 00:07:04.744 Read Recovery Levels: Not Supported 00:07:04.744 Endurance Groups: Not Supported 00:07:04.744 Predictable Latency Mode: Not Supported 00:07:04.744 Traffic Based Keep ALive: Not Supported 00:07:04.744 Namespace Granularity: Not Supported 00:07:04.744 SQ Associations: Not Supported 00:07:04.744 UUID List: Not Supported 00:07:04.744 Multi-Domain Subsystem: Not Supported 00:07:04.744 Fixed Capacity Management: Not Supported 00:07:04.744 Variable Capacity Management: Not Supported 00:07:04.744 Delete Endurance Group: Not Supported 00:07:04.744 Delete NVM Set: Not Supported 00:07:04.744 Extended LBA Formats Supported: Supported 00:07:04.744 Flexible Data Placement Supported: Not Supported 00:07:04.744 00:07:04.744 Controller Memory Buffer Support 00:07:04.744 ================================ 00:07:04.744 Supported: No 00:07:04.744 00:07:04.744 Persistent Memory Region Support 00:07:04.744 ================================ 00:07:04.744 Supported: No 00:07:04.744 00:07:04.744 Admin Command Set Attributes 00:07:04.744 ============================ 00:07:04.744 Security Send/Receive: Not Supported 00:07:04.744 Format NVM: Supported 00:07:04.744 Firmware Activate/Download: Not Supported 00:07:04.744 Namespace Management: Supported 00:07:04.744 Device Self-Test: Not Supported 00:07:04.744 Directives: Supported 00:07:04.745 NVMe-MI: Not Supported 00:07:04.745 Virtualization Management: Not Supported 00:07:04.745 Doorbell Buffer Config: Supported 00:07:04.745 Get LBA Status Capability: Not Supported 00:07:04.745 Command & Feature Lockdown Capability: Not Supported 00:07:04.745 Abort Command Limit: 4 00:07:04.745 Async Event Request Limit: 4 00:07:04.745 Number of Firmware Slots: N/A 00:07:04.745 Firmware Slot 1 Read-Only: N/A 00:07:04.745 Firmware Activation Without Reset: N/A 00:07:04.745 Multiple Update Detection Support: N/A 00:07:04.745 Firmware Update Granularity: No Information Provided 00:07:04.745 Per-Namespace SMART Log: Yes 00:07:04.745 Asymmetric Namespace Access Log Page: Not Supported 00:07:04.745 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:04.745 Command Effects Log Page: Supported 00:07:04.745 Get Log Page Extended Data: Supported 00:07:04.745 Telemetry Log Pages: Not Supported 00:07:04.745 Persistent Event Log Pages: Not Supported 00:07:04.745 Supported Log Pages Log Page: May Support 00:07:04.745 Commands Supported & Effects Log Page: Not Supported 00:07:04.745 Feature Identifiers & Effects Log Page:May Support 00:07:04.745 NVMe-MI Commands & Effects Log Page: May Support 00:07:04.745 Data Area 4 for Telemetry Log: Not Supported 00:07:04.745 Error Log Page Entries Supported: 1 00:07:04.745 Keep Alive: Not Supported 00:07:04.745 00:07:04.745 NVM Command Set Attributes 00:07:04.745 ========================== 00:07:04.745 Submission Queue Entry Size 00:07:04.745 Max: 64 00:07:04.745 Min: 64 00:07:04.745 Completion Queue Entry Size 00:07:04.745 Max: 16 00:07:04.745 Min: 16 00:07:04.745 Number of Namespaces: 256 00:07:04.745 Compare Command: Supported 00:07:04.745 Write Uncorrectable Command: Not Supported 00:07:04.745 Dataset Management Command: Supported 00:07:04.745 Write Zeroes Command: Supported 00:07:04.745 Set Features Save Field: Supported 00:07:04.745 Reservations: Not Supported 00:07:04.745 Timestamp: Supported 00:07:04.745 Copy: Supported 00:07:04.745 Volatile Write Cache: Present 00:07:04.745 Atomic Write Unit (Normal): 1 00:07:04.745 Atomic Write Unit (PFail): 1 00:07:04.745 Atomic Compare & Write Unit: 1 00:07:04.745 Fused Compare & Write: Not Supported 00:07:04.745 Scatter-Gather List 00:07:04.745 SGL Command Set: Supported 00:07:04.745 SGL Keyed: Not Supported 00:07:04.745 SGL Bit Bucket Descriptor: Not Supported 00:07:04.745 SGL Metadata Pointer: Not Supported 00:07:04.745 Oversized SGL: Not Supported 00:07:04.745 SGL Metadata Address: Not Supported 00:07:04.745 SGL Offset: Not Supported 00:07:04.745 Transport SGL Data Block: Not Supported 00:07:04.745 Replay Protected Memory Block: Not Supported 00:07:04.745 00:07:04.745 Firmware Slot Information 00:07:04.745 ========================= 00:07:04.745 Active slot: 1 00:07:04.745 Slot 1 Firmware Revision: 1.0 00:07:04.745 00:07:04.745 00:07:04.745 Commands Supported and Effects 00:07:04.745 ============================== 00:07:04.745 Admin Commands 00:07:04.745 -------------- 00:07:04.745 Delete I/O Submission Queue (00h): Supported 00:07:04.745 Create I/O Submission Queue (01h): Supported 00:07:04.745 Get Log Page (02h): Supported 00:07:04.745 Delete I/O Completion Queue (04h): Supported 00:07:04.745 Create I/O Completion Queue (05h): Supported 00:07:04.745 Identify (06h): Supported 00:07:04.745 Abort (08h): Supported 00:07:04.745 Set Features (09h): Supported 00:07:04.745 Get Features (0Ah): Supported 00:07:04.745 Asynchronous Event Request (0Ch): Supported 00:07:04.745 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:04.745 Directive Send (19h): Supported 00:07:04.745 Directive Receive (1Ah): Supported 00:07:04.745 Virtualization Management (1Ch): Supported 00:07:04.745 Doorbell Buffer Config (7Ch): Supported 00:07:04.745 Format NVM (80h): Supported LBA-Change 00:07:04.745 I/O Commands 00:07:04.745 ------------ 00:07:04.745 Flush (00h): Supported LBA-Change 00:07:04.745 Write (01h): Supported LBA-Change 00:07:04.745 Read (02h): Supported 00:07:04.745 Compare (05h): Supported 00:07:04.745 Write Zeroes (08h): Supported LBA-Change 00:07:04.745 Dataset Management (09h): Supported LBA-Change 00:07:04.745 Unknown (0Ch): Supported 00:07:04.745 Unknown (12h): Supported 00:07:04.745 Copy (19h): Supported LBA-Change 00:07:04.745 Unknown (1Dh): 
Supported LBA-Change 00:07:04.745 00:07:04.745 Error Log 00:07:04.745 ========= 00:07:04.745 00:07:04.745 Arbitration 00:07:04.745 =========== 00:07:04.745 Arbitration Burst: no limit 00:07:04.745 00:07:04.745 Power Management 00:07:04.745 ================ 00:07:04.745 Number of Power States: 1 00:07:04.745 Current Power State: Power State #0 00:07:04.745 Power State #0: 00:07:04.745 Max Power: 25.00 W 00:07:04.745 Non-Operational State: Operational 00:07:04.745 Entry Latency: 16 microseconds 00:07:04.745 Exit Latency: 4 microseconds 00:07:04.745 Relative Read Throughput: 0 00:07:04.745 Relative Read Latency: 0 00:07:04.745 Relative Write Throughput: 0 00:07:04.745 Relative Write Latency: 0 00:07:04.745 Idle Power: Not Reported 00:07:04.745 Active Power: Not Reported 00:07:04.745 Non-Operational Permissive Mode: Not Supported 00:07:04.745 00:07:04.745 Health Information 00:07:04.745 ================== 00:07:04.745 Critical Warnings: 00:07:04.745 Available Spare Space: OK 00:07:04.745 Temperature: OK 00:07:04.745 Device Reliability: OK 00:07:04.745 Read Only: No 00:07:04.745 Volatile Memory Backup: OK 00:07:04.745 Current Temperature: 323 Kelvin (50 Celsius) 00:07:04.745 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:04.745 Available Spare: 0% 00:07:04.745 Available Spare Threshold: 0% 00:07:04.745 Life Percentage Used: 0% 00:07:04.745 Data Units Read: 986 00:07:04.745 Data Units Written: 853 00:07:04.745 Host Read Commands: 50417 00:07:04.745 Host Write Commands: 49211 00:07:04.745 Controller Busy Time: 0 minutes 00:07:04.745 Power Cycles: 0 00:07:04.745 Power On Hours: 0 hours 00:07:04.745 Unsafe Shutdowns: 0 00:07:04.745 Unrecoverable Media Errors: 0 00:07:04.745 Lifetime Error Log Entries: 0 00:07:04.745 Warning Temperature Time: 0 minutes 00:07:04.745 Critical Temperature Time: 0 minutes 00:07:04.745 00:07:04.745 Number of Queues 00:07:04.745 ================ 00:07:04.745 Number of I/O Submission Queues: 64 00:07:04.745 Number of I/O Completion Queues: 64 00:07:04.745 00:07:04.745 ZNS Specific Controller Data 00:07:04.745 ============================ 00:07:04.745 Zone Append Size Limit: 0 00:07:04.745 00:07:04.745 00:07:04.745 Active Namespaces 00:07:04.745 ================= 00:07:04.745 Namespace ID:1 00:07:04.745 Error Recovery Timeout: Unlimited 00:07:04.745 Command Set Identifier: NVM (00h) 00:07:04.745 Deallocate: Supported 00:07:04.745 Deallocated/Unwritten Error: Supported 00:07:04.745 Deallocated Read Value: All 0x00 00:07:04.745 Deallocate in Write Zeroes: Not Supported 00:07:04.745 Deallocated Guard Field: 0xFFFF 00:07:04.745 Flush: Supported 00:07:04.745 Reservation: Not Supported 00:07:04.745 Namespace Sharing Capabilities: Private 00:07:04.745 Size (in LBAs): 1310720 (5GiB) 00:07:04.745 Capacity (in LBAs): 1310720 (5GiB) 00:07:04.745 Utilization (in LBAs): 1310720 (5GiB) 00:07:04.745 Thin Provisioning: Not Supported 00:07:04.745 Per-NS Atomic Units: No 00:07:04.745 Maximum Single Source Range Length: 128 00:07:04.745 Maximum Copy Length: 128 00:07:04.745 Maximum Source Range Count: 128 00:07:04.745 NGUID/EUI64 Never Reused: No 00:07:04.745 Namespace Write Protected: No 00:07:04.745 Number of LBA Formats: 8 00:07:04.745 Current LBA Format: LBA Format #04 00:07:04.745 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:04.745 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:04.745 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:04.745 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:04.745 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:04.745 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:04.745 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:04.745 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:04.745 00:07:04.745 NVM Specific Namespace Data 00:07:04.745 =========================== 00:07:04.745 Logical Block Storage Tag Mask: 0 00:07:04.745 Protection Information Capabilities: 00:07:04.745 16b Guard Protection Information Storage Tag Support: No 00:07:04.745 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:04.745 Storage Tag Check Read Support: No 00:07:04.745 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.745 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.745 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.746 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.746 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.746 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.746 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.746 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:04.746 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:04.746 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:05.005 ===================================================== 00:07:05.005 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:05.005 ===================================================== 00:07:05.005 Controller Capabilities/Features 00:07:05.005 ================================ 00:07:05.005 Vendor ID: 1b36 00:07:05.005 Subsystem Vendor ID: 1af4 00:07:05.005 Serial Number: 12342 00:07:05.005 Model Number: QEMU NVMe Ctrl 00:07:05.005 Firmware Version: 8.0.0 00:07:05.005 Recommended Arb Burst: 6 00:07:05.005 IEEE OUI Identifier: 00 54 52 00:07:05.005 Multi-path I/O 00:07:05.005 May have multiple subsystem ports: No 00:07:05.005 May have multiple controllers: No 00:07:05.005 Associated with SR-IOV VF: No 00:07:05.005 Max Data Transfer Size: 524288 00:07:05.005 Max Number of Namespaces: 256 00:07:05.005 Max Number of I/O Queues: 64 00:07:05.005 NVMe Specification Version (VS): 1.4 00:07:05.005 NVMe Specification Version (Identify): 1.4 00:07:05.005 Maximum Queue Entries: 2048 00:07:05.005 Contiguous Queues Required: Yes 00:07:05.005 Arbitration Mechanisms Supported 00:07:05.005 Weighted Round Robin: Not Supported 00:07:05.005 Vendor Specific: Not Supported 00:07:05.005 Reset Timeout: 7500 ms 00:07:05.005 Doorbell Stride: 4 bytes 00:07:05.005 NVM Subsystem Reset: Not Supported 00:07:05.005 Command Sets Supported 00:07:05.005 NVM Command Set: Supported 00:07:05.005 Boot Partition: Not Supported 00:07:05.005 Memory Page Size Minimum: 4096 bytes 00:07:05.005 Memory Page Size Maximum: 65536 bytes 00:07:05.005 Persistent Memory Region: Not Supported 00:07:05.005 Optional Asynchronous Events Supported 00:07:05.005 Namespace Attribute Notices: Supported 00:07:05.005 Firmware Activation Notices: Not Supported 00:07:05.005 ANA Change Notices: Not Supported 00:07:05.005 PLE Aggregate Log Change Notices: Not Supported 00:07:05.005 LBA Status Info Alert Notices: 
Not Supported 00:07:05.005 EGE Aggregate Log Change Notices: Not Supported 00:07:05.005 Normal NVM Subsystem Shutdown event: Not Supported 00:07:05.005 Zone Descriptor Change Notices: Not Supported 00:07:05.005 Discovery Log Change Notices: Not Supported 00:07:05.005 Controller Attributes 00:07:05.005 128-bit Host Identifier: Not Supported 00:07:05.005 Non-Operational Permissive Mode: Not Supported 00:07:05.005 NVM Sets: Not Supported 00:07:05.005 Read Recovery Levels: Not Supported 00:07:05.005 Endurance Groups: Not Supported 00:07:05.005 Predictable Latency Mode: Not Supported 00:07:05.005 Traffic Based Keep ALive: Not Supported 00:07:05.005 Namespace Granularity: Not Supported 00:07:05.005 SQ Associations: Not Supported 00:07:05.005 UUID List: Not Supported 00:07:05.005 Multi-Domain Subsystem: Not Supported 00:07:05.005 Fixed Capacity Management: Not Supported 00:07:05.005 Variable Capacity Management: Not Supported 00:07:05.005 Delete Endurance Group: Not Supported 00:07:05.005 Delete NVM Set: Not Supported 00:07:05.005 Extended LBA Formats Supported: Supported 00:07:05.005 Flexible Data Placement Supported: Not Supported 00:07:05.005 00:07:05.005 Controller Memory Buffer Support 00:07:05.005 ================================ 00:07:05.005 Supported: No 00:07:05.005 00:07:05.005 Persistent Memory Region Support 00:07:05.005 ================================ 00:07:05.005 Supported: No 00:07:05.005 00:07:05.005 Admin Command Set Attributes 00:07:05.005 ============================ 00:07:05.005 Security Send/Receive: Not Supported 00:07:05.005 Format NVM: Supported 00:07:05.005 Firmware Activate/Download: Not Supported 00:07:05.005 Namespace Management: Supported 00:07:05.005 Device Self-Test: Not Supported 00:07:05.005 Directives: Supported 00:07:05.005 NVMe-MI: Not Supported 00:07:05.005 Virtualization Management: Not Supported 00:07:05.005 Doorbell Buffer Config: Supported 00:07:05.005 Get LBA Status Capability: Not Supported 00:07:05.005 Command & Feature Lockdown Capability: Not Supported 00:07:05.005 Abort Command Limit: 4 00:07:05.005 Async Event Request Limit: 4 00:07:05.005 Number of Firmware Slots: N/A 00:07:05.005 Firmware Slot 1 Read-Only: N/A 00:07:05.005 Firmware Activation Without Reset: N/A 00:07:05.005 Multiple Update Detection Support: N/A 00:07:05.005 Firmware Update Granularity: No Information Provided 00:07:05.005 Per-Namespace SMART Log: Yes 00:07:05.006 Asymmetric Namespace Access Log Page: Not Supported 00:07:05.006 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:05.006 Command Effects Log Page: Supported 00:07:05.006 Get Log Page Extended Data: Supported 00:07:05.006 Telemetry Log Pages: Not Supported 00:07:05.006 Persistent Event Log Pages: Not Supported 00:07:05.006 Supported Log Pages Log Page: May Support 00:07:05.006 Commands Supported & Effects Log Page: Not Supported 00:07:05.006 Feature Identifiers & Effects Log Page:May Support 00:07:05.006 NVMe-MI Commands & Effects Log Page: May Support 00:07:05.006 Data Area 4 for Telemetry Log: Not Supported 00:07:05.006 Error Log Page Entries Supported: 1 00:07:05.006 Keep Alive: Not Supported 00:07:05.006 00:07:05.006 NVM Command Set Attributes 00:07:05.006 ========================== 00:07:05.006 Submission Queue Entry Size 00:07:05.006 Max: 64 00:07:05.006 Min: 64 00:07:05.006 Completion Queue Entry Size 00:07:05.006 Max: 16 00:07:05.006 Min: 16 00:07:05.006 Number of Namespaces: 256 00:07:05.006 Compare Command: Supported 00:07:05.006 Write Uncorrectable Command: Not Supported 00:07:05.006 Dataset Management Command: 
Supported 00:07:05.006 Write Zeroes Command: Supported 00:07:05.006 Set Features Save Field: Supported 00:07:05.006 Reservations: Not Supported 00:07:05.006 Timestamp: Supported 00:07:05.006 Copy: Supported 00:07:05.006 Volatile Write Cache: Present 00:07:05.006 Atomic Write Unit (Normal): 1 00:07:05.006 Atomic Write Unit (PFail): 1 00:07:05.006 Atomic Compare & Write Unit: 1 00:07:05.006 Fused Compare & Write: Not Supported 00:07:05.006 Scatter-Gather List 00:07:05.006 SGL Command Set: Supported 00:07:05.006 SGL Keyed: Not Supported 00:07:05.006 SGL Bit Bucket Descriptor: Not Supported 00:07:05.006 SGL Metadata Pointer: Not Supported 00:07:05.006 Oversized SGL: Not Supported 00:07:05.006 SGL Metadata Address: Not Supported 00:07:05.006 SGL Offset: Not Supported 00:07:05.006 Transport SGL Data Block: Not Supported 00:07:05.006 Replay Protected Memory Block: Not Supported 00:07:05.006 00:07:05.006 Firmware Slot Information 00:07:05.006 ========================= 00:07:05.006 Active slot: 1 00:07:05.006 Slot 1 Firmware Revision: 1.0 00:07:05.006 00:07:05.006 00:07:05.006 Commands Supported and Effects 00:07:05.006 ============================== 00:07:05.006 Admin Commands 00:07:05.006 -------------- 00:07:05.006 Delete I/O Submission Queue (00h): Supported 00:07:05.006 Create I/O Submission Queue (01h): Supported 00:07:05.006 Get Log Page (02h): Supported 00:07:05.006 Delete I/O Completion Queue (04h): Supported 00:07:05.006 Create I/O Completion Queue (05h): Supported 00:07:05.006 Identify (06h): Supported 00:07:05.006 Abort (08h): Supported 00:07:05.006 Set Features (09h): Supported 00:07:05.006 Get Features (0Ah): Supported 00:07:05.006 Asynchronous Event Request (0Ch): Supported 00:07:05.006 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:05.006 Directive Send (19h): Supported 00:07:05.006 Directive Receive (1Ah): Supported 00:07:05.006 Virtualization Management (1Ch): Supported 00:07:05.006 Doorbell Buffer Config (7Ch): Supported 00:07:05.006 Format NVM (80h): Supported LBA-Change 00:07:05.006 I/O Commands 00:07:05.006 ------------ 00:07:05.006 Flush (00h): Supported LBA-Change 00:07:05.006 Write (01h): Supported LBA-Change 00:07:05.006 Read (02h): Supported 00:07:05.006 Compare (05h): Supported 00:07:05.006 Write Zeroes (08h): Supported LBA-Change 00:07:05.006 Dataset Management (09h): Supported LBA-Change 00:07:05.006 Unknown (0Ch): Supported 00:07:05.006 Unknown (12h): Supported 00:07:05.006 Copy (19h): Supported LBA-Change 00:07:05.006 Unknown (1Dh): Supported LBA-Change 00:07:05.006 00:07:05.006 Error Log 00:07:05.006 ========= 00:07:05.006 00:07:05.006 Arbitration 00:07:05.006 =========== 00:07:05.006 Arbitration Burst: no limit 00:07:05.006 00:07:05.006 Power Management 00:07:05.006 ================ 00:07:05.006 Number of Power States: 1 00:07:05.006 Current Power State: Power State #0 00:07:05.006 Power State #0: 00:07:05.006 Max Power: 25.00 W 00:07:05.006 Non-Operational State: Operational 00:07:05.006 Entry Latency: 16 microseconds 00:07:05.006 Exit Latency: 4 microseconds 00:07:05.006 Relative Read Throughput: 0 00:07:05.006 Relative Read Latency: 0 00:07:05.006 Relative Write Throughput: 0 00:07:05.006 Relative Write Latency: 0 00:07:05.006 Idle Power: Not Reported 00:07:05.006 Active Power: Not Reported 00:07:05.006 Non-Operational Permissive Mode: Not Supported 00:07:05.006 00:07:05.006 Health Information 00:07:05.006 ================== 00:07:05.006 Critical Warnings: 00:07:05.006 Available Spare Space: OK 00:07:05.006 Temperature: OK 00:07:05.006 Device 
Reliability: OK 00:07:05.006 Read Only: No 00:07:05.006 Volatile Memory Backup: OK 00:07:05.006 Current Temperature: 323 Kelvin (50 Celsius) 00:07:05.006 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:05.006 Available Spare: 0% 00:07:05.006 Available Spare Threshold: 0% 00:07:05.006 Life Percentage Used: 0% 00:07:05.006 Data Units Read: 2160 00:07:05.006 Data Units Written: 1947 00:07:05.006 Host Read Commands: 105072 00:07:05.006 Host Write Commands: 103342 00:07:05.006 Controller Busy Time: 0 minutes 00:07:05.006 Power Cycles: 0 00:07:05.006 Power On Hours: 0 hours 00:07:05.006 Unsafe Shutdowns: 0 00:07:05.006 Unrecoverable Media Errors: 0 00:07:05.006 Lifetime Error Log Entries: 0 00:07:05.006 Warning Temperature Time: 0 minutes 00:07:05.006 Critical Temperature Time: 0 minutes 00:07:05.006 00:07:05.006 Number of Queues 00:07:05.006 ================ 00:07:05.006 Number of I/O Submission Queues: 64 00:07:05.006 Number of I/O Completion Queues: 64 00:07:05.006 00:07:05.006 ZNS Specific Controller Data 00:07:05.006 ============================ 00:07:05.006 Zone Append Size Limit: 0 00:07:05.006 00:07:05.006 00:07:05.006 Active Namespaces 00:07:05.006 ================= 00:07:05.006 Namespace ID:1 00:07:05.006 Error Recovery Timeout: Unlimited 00:07:05.006 Command Set Identifier: NVM (00h) 00:07:05.006 Deallocate: Supported 00:07:05.006 Deallocated/Unwritten Error: Supported 00:07:05.006 Deallocated Read Value: All 0x00 00:07:05.006 Deallocate in Write Zeroes: Not Supported 00:07:05.006 Deallocated Guard Field: 0xFFFF 00:07:05.006 Flush: Supported 00:07:05.006 Reservation: Not Supported 00:07:05.006 Namespace Sharing Capabilities: Private 00:07:05.006 Size (in LBAs): 1048576 (4GiB) 00:07:05.006 Capacity (in LBAs): 1048576 (4GiB) 00:07:05.006 Utilization (in LBAs): 1048576 (4GiB) 00:07:05.006 Thin Provisioning: Not Supported 00:07:05.006 Per-NS Atomic Units: No 00:07:05.006 Maximum Single Source Range Length: 128 00:07:05.006 Maximum Copy Length: 128 00:07:05.006 Maximum Source Range Count: 128 00:07:05.006 NGUID/EUI64 Never Reused: No 00:07:05.006 Namespace Write Protected: No 00:07:05.006 Number of LBA Formats: 8 00:07:05.006 Current LBA Format: LBA Format #04 00:07:05.006 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:05.006 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:05.006 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:05.006 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:05.006 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:05.006 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:05.006 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:05.006 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:05.006 00:07:05.006 NVM Specific Namespace Data 00:07:05.006 =========================== 00:07:05.006 Logical Block Storage Tag Mask: 0 00:07:05.006 Protection Information Capabilities: 00:07:05.006 16b Guard Protection Information Storage Tag Support: No 00:07:05.006 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:05.006 Storage Tag Check Read Support: No 00:07:05.006 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.006 Namespace ID:2 00:07:05.006 Error Recovery Timeout: Unlimited 00:07:05.006 Command Set Identifier: NVM (00h) 00:07:05.006 Deallocate: Supported 00:07:05.006 Deallocated/Unwritten Error: Supported 00:07:05.006 Deallocated Read Value: All 0x00 00:07:05.007 Deallocate in Write Zeroes: Not Supported 00:07:05.007 Deallocated Guard Field: 0xFFFF 00:07:05.007 Flush: Supported 00:07:05.007 Reservation: Not Supported 00:07:05.007 Namespace Sharing Capabilities: Private 00:07:05.007 Size (in LBAs): 1048576 (4GiB) 00:07:05.007 Capacity (in LBAs): 1048576 (4GiB) 00:07:05.007 Utilization (in LBAs): 1048576 (4GiB) 00:07:05.007 Thin Provisioning: Not Supported 00:07:05.007 Per-NS Atomic Units: No 00:07:05.007 Maximum Single Source Range Length: 128 00:07:05.007 Maximum Copy Length: 128 00:07:05.007 Maximum Source Range Count: 128 00:07:05.007 NGUID/EUI64 Never Reused: No 00:07:05.007 Namespace Write Protected: No 00:07:05.007 Number of LBA Formats: 8 00:07:05.007 Current LBA Format: LBA Format #04 00:07:05.007 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:05.007 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:05.007 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:05.007 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:05.007 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:05.007 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:05.007 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:05.007 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:05.007 00:07:05.007 NVM Specific Namespace Data 00:07:05.007 =========================== 00:07:05.007 Logical Block Storage Tag Mask: 0 00:07:05.007 Protection Information Capabilities: 00:07:05.007 16b Guard Protection Information Storage Tag Support: No 00:07:05.007 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:05.007 Storage Tag Check Read Support: No 00:07:05.007 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Namespace ID:3 00:07:05.007 Error Recovery Timeout: Unlimited 00:07:05.007 Command Set Identifier: NVM (00h) 00:07:05.007 Deallocate: Supported 00:07:05.007 Deallocated/Unwritten Error: Supported 00:07:05.007 Deallocated Read Value: All 0x00 00:07:05.007 Deallocate in Write Zeroes: Not Supported 00:07:05.007 Deallocated Guard Field: 0xFFFF 00:07:05.007 Flush: Supported 00:07:05.007 Reservation: Not Supported 00:07:05.007 
Namespace Sharing Capabilities: Private 00:07:05.007 Size (in LBAs): 1048576 (4GiB) 00:07:05.007 Capacity (in LBAs): 1048576 (4GiB) 00:07:05.007 Utilization (in LBAs): 1048576 (4GiB) 00:07:05.007 Thin Provisioning: Not Supported 00:07:05.007 Per-NS Atomic Units: No 00:07:05.007 Maximum Single Source Range Length: 128 00:07:05.007 Maximum Copy Length: 128 00:07:05.007 Maximum Source Range Count: 128 00:07:05.007 NGUID/EUI64 Never Reused: No 00:07:05.007 Namespace Write Protected: No 00:07:05.007 Number of LBA Formats: 8 00:07:05.007 Current LBA Format: LBA Format #04 00:07:05.007 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:05.007 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:05.007 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:05.007 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:05.007 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:05.007 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:05.007 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:05.007 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:05.007 00:07:05.007 NVM Specific Namespace Data 00:07:05.007 =========================== 00:07:05.007 Logical Block Storage Tag Mask: 0 00:07:05.007 Protection Information Capabilities: 00:07:05.007 16b Guard Protection Information Storage Tag Support: No 00:07:05.007 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:05.007 Storage Tag Check Read Support: No 00:07:05.007 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.007 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:05.007 10:35:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:05.267 ===================================================== 00:07:05.267 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:05.267 ===================================================== 00:07:05.267 Controller Capabilities/Features 00:07:05.267 ================================ 00:07:05.267 Vendor ID: 1b36 00:07:05.267 Subsystem Vendor ID: 1af4 00:07:05.267 Serial Number: 12343 00:07:05.267 Model Number: QEMU NVMe Ctrl 00:07:05.267 Firmware Version: 8.0.0 00:07:05.267 Recommended Arb Burst: 6 00:07:05.267 IEEE OUI Identifier: 00 54 52 00:07:05.267 Multi-path I/O 00:07:05.267 May have multiple subsystem ports: No 00:07:05.267 May have multiple controllers: Yes 00:07:05.267 Associated with SR-IOV VF: No 00:07:05.267 Max Data Transfer Size: 524288 00:07:05.267 Max Number of Namespaces: 256 00:07:05.267 Max Number of I/O Queues: 64 00:07:05.267 NVMe Specification Version (VS): 1.4 00:07:05.267 NVMe Specification Version (Identify): 1.4 00:07:05.267 Maximum Queue Entries: 2048 
00:07:05.267 Contiguous Queues Required: Yes 00:07:05.267 Arbitration Mechanisms Supported 00:07:05.267 Weighted Round Robin: Not Supported 00:07:05.267 Vendor Specific: Not Supported 00:07:05.267 Reset Timeout: 7500 ms 00:07:05.267 Doorbell Stride: 4 bytes 00:07:05.267 NVM Subsystem Reset: Not Supported 00:07:05.267 Command Sets Supported 00:07:05.267 NVM Command Set: Supported 00:07:05.267 Boot Partition: Not Supported 00:07:05.267 Memory Page Size Minimum: 4096 bytes 00:07:05.267 Memory Page Size Maximum: 65536 bytes 00:07:05.267 Persistent Memory Region: Not Supported 00:07:05.267 Optional Asynchronous Events Supported 00:07:05.267 Namespace Attribute Notices: Supported 00:07:05.267 Firmware Activation Notices: Not Supported 00:07:05.267 ANA Change Notices: Not Supported 00:07:05.267 PLE Aggregate Log Change Notices: Not Supported 00:07:05.267 LBA Status Info Alert Notices: Not Supported 00:07:05.267 EGE Aggregate Log Change Notices: Not Supported 00:07:05.267 Normal NVM Subsystem Shutdown event: Not Supported 00:07:05.267 Zone Descriptor Change Notices: Not Supported 00:07:05.267 Discovery Log Change Notices: Not Supported 00:07:05.267 Controller Attributes 00:07:05.267 128-bit Host Identifier: Not Supported 00:07:05.267 Non-Operational Permissive Mode: Not Supported 00:07:05.267 NVM Sets: Not Supported 00:07:05.267 Read Recovery Levels: Not Supported 00:07:05.267 Endurance Groups: Supported 00:07:05.267 Predictable Latency Mode: Not Supported 00:07:05.267 Traffic Based Keep Alive: Not Supported 00:07:05.267 Namespace Granularity: Not Supported 00:07:05.267 SQ Associations: Not Supported 00:07:05.267 UUID List: Not Supported 00:07:05.267 Multi-Domain Subsystem: Not Supported 00:07:05.267 Fixed Capacity Management: Not Supported 00:07:05.267 Variable Capacity Management: Not Supported 00:07:05.267 Delete Endurance Group: Not Supported 00:07:05.267 Delete NVM Set: Not Supported 00:07:05.267 Extended LBA Formats Supported: Supported 00:07:05.267 Flexible Data Placement Supported: Supported 00:07:05.267 00:07:05.267 Controller Memory Buffer Support 00:07:05.267 ================================ 00:07:05.267 Supported: No 00:07:05.267 00:07:05.267 Persistent Memory Region Support 00:07:05.267 ================================ 00:07:05.267 Supported: No 00:07:05.267 00:07:05.267 Admin Command Set Attributes 00:07:05.267 ============================ 00:07:05.267 Security Send/Receive: Not Supported 00:07:05.267 Format NVM: Supported 00:07:05.267 Firmware Activate/Download: Not Supported 00:07:05.267 Namespace Management: Supported 00:07:05.267 Device Self-Test: Not Supported 00:07:05.267 Directives: Supported 00:07:05.267 NVMe-MI: Not Supported 00:07:05.267 Virtualization Management: Not Supported 00:07:05.267 Doorbell Buffer Config: Supported 00:07:05.267 Get LBA Status Capability: Not Supported 00:07:05.267 Command & Feature Lockdown Capability: Not Supported 00:07:05.267 Abort Command Limit: 4 00:07:05.267 Async Event Request Limit: 4 00:07:05.267 Number of Firmware Slots: N/A 00:07:05.267 Firmware Slot 1 Read-Only: N/A 00:07:05.267 Firmware Activation Without Reset: N/A 00:07:05.267 Multiple Update Detection Support: N/A 00:07:05.267 Firmware Update Granularity: No Information Provided 00:07:05.267 Per-Namespace SMART Log: Yes 00:07:05.267 Asymmetric Namespace Access Log Page: Not Supported 00:07:05.267 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:05.267 Command Effects Log Page: Supported 00:07:05.267 Get Log Page Extended Data: Supported 00:07:05.267 Telemetry Log Pages: Not 
Supported 00:07:05.267 Persistent Event Log Pages: Not Supported 00:07:05.267 Supported Log Pages Log Page: May Support 00:07:05.267 Commands Supported & Effects Log Page: Not Supported 00:07:05.267 Feature Identifiers & Effects Log Page: May Support 00:07:05.267 NVMe-MI Commands & Effects Log Page: May Support 00:07:05.267 Data Area 4 for Telemetry Log: Not Supported 00:07:05.267 Error Log Page Entries Supported: 1 00:07:05.267 Keep Alive: Not Supported 00:07:05.267 00:07:05.267 NVM Command Set Attributes 00:07:05.267 ========================== 00:07:05.267 Submission Queue Entry Size 00:07:05.267 Max: 64 00:07:05.267 Min: 64 00:07:05.267 Completion Queue Entry Size 00:07:05.267 Max: 16 00:07:05.267 Min: 16 00:07:05.267 Number of Namespaces: 256 00:07:05.267 Compare Command: Supported 00:07:05.267 Write Uncorrectable Command: Not Supported 00:07:05.267 Dataset Management Command: Supported 00:07:05.267 Write Zeroes Command: Supported 00:07:05.267 Set Features Save Field: Supported 00:07:05.267 Reservations: Not Supported 00:07:05.267 Timestamp: Supported 00:07:05.268 Copy: Supported 00:07:05.268 Volatile Write Cache: Present 00:07:05.268 Atomic Write Unit (Normal): 1 00:07:05.268 Atomic Write Unit (PFail): 1 00:07:05.268 Atomic Compare & Write Unit: 1 00:07:05.268 Fused Compare & Write: Not Supported 00:07:05.268 Scatter-Gather List 00:07:05.268 SGL Command Set: Supported 00:07:05.268 SGL Keyed: Not Supported 00:07:05.268 SGL Bit Bucket Descriptor: Not Supported 00:07:05.268 SGL Metadata Pointer: Not Supported 00:07:05.268 Oversized SGL: Not Supported 00:07:05.268 SGL Metadata Address: Not Supported 00:07:05.268 SGL Offset: Not Supported 00:07:05.268 Transport SGL Data Block: Not Supported 00:07:05.268 Replay Protected Memory Block: Not Supported 00:07:05.268 00:07:05.268 Firmware Slot Information 00:07:05.268 ========================= 00:07:05.268 Active slot: 1 00:07:05.268 Slot 1 Firmware Revision: 1.0 00:07:05.268 00:07:05.268 00:07:05.268 Commands Supported and Effects 00:07:05.268 ============================== 00:07:05.268 Admin Commands 00:07:05.268 -------------- 00:07:05.268 Delete I/O Submission Queue (00h): Supported 00:07:05.268 Create I/O Submission Queue (01h): Supported 00:07:05.268 Get Log Page (02h): Supported 00:07:05.268 Delete I/O Completion Queue (04h): Supported 00:07:05.268 Create I/O Completion Queue (05h): Supported 00:07:05.268 Identify (06h): Supported 00:07:05.268 Abort (08h): Supported 00:07:05.268 Set Features (09h): Supported 00:07:05.268 Get Features (0Ah): Supported 00:07:05.268 Asynchronous Event Request (0Ch): Supported 00:07:05.268 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:05.268 Directive Send (19h): Supported 00:07:05.268 Directive Receive (1Ah): Supported 00:07:05.268 Virtualization Management (1Ch): Supported 00:07:05.268 Doorbell Buffer Config (7Ch): Supported 00:07:05.268 Format NVM (80h): Supported LBA-Change 00:07:05.268 I/O Commands 00:07:05.268 ------------ 00:07:05.268 Flush (00h): Supported LBA-Change 00:07:05.268 Write (01h): Supported LBA-Change 00:07:05.268 Read (02h): Supported 00:07:05.268 Compare (05h): Supported 00:07:05.268 Write Zeroes (08h): Supported LBA-Change 00:07:05.268 Dataset Management (09h): Supported LBA-Change 00:07:05.268 Unknown (0Ch): Supported 00:07:05.268 Unknown (12h): Supported 00:07:05.268 Copy (19h): Supported LBA-Change 00:07:05.268 Unknown (1Dh): Supported LBA-Change 00:07:05.268 00:07:05.268 Error Log 00:07:05.268 ========= 00:07:05.268 00:07:05.268 Arbitration 00:07:05.268 =========== 
00:07:05.268 Arbitration Burst: no limit 00:07:05.268 00:07:05.268 Power Management 00:07:05.268 ================ 00:07:05.268 Number of Power States: 1 00:07:05.268 Current Power State: Power State #0 00:07:05.268 Power State #0: 00:07:05.268 Max Power: 25.00 W 00:07:05.268 Non-Operational State: Operational 00:07:05.268 Entry Latency: 16 microseconds 00:07:05.268 Exit Latency: 4 microseconds 00:07:05.268 Relative Read Throughput: 0 00:07:05.268 Relative Read Latency: 0 00:07:05.268 Relative Write Throughput: 0 00:07:05.268 Relative Write Latency: 0 00:07:05.268 Idle Power: Not Reported 00:07:05.268 Active Power: Not Reported 00:07:05.268 Non-Operational Permissive Mode: Not Supported 00:07:05.268 00:07:05.268 Health Information 00:07:05.268 ================== 00:07:05.268 Critical Warnings: 00:07:05.268 Available Spare Space: OK 00:07:05.268 Temperature: OK 00:07:05.268 Device Reliability: OK 00:07:05.268 Read Only: No 00:07:05.268 Volatile Memory Backup: OK 00:07:05.268 Current Temperature: 323 Kelvin (50 Celsius) 00:07:05.268 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:05.268 Available Spare: 0% 00:07:05.268 Available Spare Threshold: 0% 00:07:05.268 Life Percentage Used: 0% 00:07:05.268 Data Units Read: 778 00:07:05.268 Data Units Written: 708 00:07:05.268 Host Read Commands: 35619 00:07:05.268 Host Write Commands: 35042 00:07:05.268 Controller Busy Time: 0 minutes 00:07:05.268 Power Cycles: 0 00:07:05.268 Power On Hours: 0 hours 00:07:05.268 Unsafe Shutdowns: 0 00:07:05.268 Unrecoverable Media Errors: 0 00:07:05.268 Lifetime Error Log Entries: 0 00:07:05.268 Warning Temperature Time: 0 minutes 00:07:05.268 Critical Temperature Time: 0 minutes 00:07:05.268 00:07:05.268 Number of Queues 00:07:05.268 ================ 00:07:05.268 Number of I/O Submission Queues: 64 00:07:05.268 Number of I/O Completion Queues: 64 00:07:05.268 00:07:05.268 ZNS Specific Controller Data 00:07:05.268 ============================ 00:07:05.268 Zone Append Size Limit: 0 00:07:05.268 00:07:05.268 00:07:05.268 Active Namespaces 00:07:05.268 ================= 00:07:05.268 Namespace ID:1 00:07:05.268 Error Recovery Timeout: Unlimited 00:07:05.268 Command Set Identifier: NVM (00h) 00:07:05.268 Deallocate: Supported 00:07:05.268 Deallocated/Unwritten Error: Supported 00:07:05.268 Deallocated Read Value: All 0x00 00:07:05.268 Deallocate in Write Zeroes: Not Supported 00:07:05.268 Deallocated Guard Field: 0xFFFF 00:07:05.268 Flush: Supported 00:07:05.268 Reservation: Not Supported 00:07:05.268 Namespace Sharing Capabilities: Multiple Controllers 00:07:05.268 Size (in LBAs): 262144 (1GiB) 00:07:05.268 Capacity (in LBAs): 262144 (1GiB) 00:07:05.268 Utilization (in LBAs): 262144 (1GiB) 00:07:05.268 Thin Provisioning: Not Supported 00:07:05.268 Per-NS Atomic Units: No 00:07:05.268 Maximum Single Source Range Length: 128 00:07:05.268 Maximum Copy Length: 128 00:07:05.268 Maximum Source Range Count: 128 00:07:05.268 NGUID/EUI64 Never Reused: No 00:07:05.268 Namespace Write Protected: No 00:07:05.268 Endurance group ID: 1 00:07:05.268 Number of LBA Formats: 8 00:07:05.268 Current LBA Format: LBA Format #04 00:07:05.268 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:05.268 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:05.268 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:05.268 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:05.268 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:05.268 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:05.268 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:05.268 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:05.268 00:07:05.268 Get Feature FDP: 00:07:05.268 ================ 00:07:05.268 Enabled: Yes 00:07:05.268 FDP configuration index: 0 00:07:05.268 00:07:05.268 FDP configurations log page 00:07:05.268 =========================== 00:07:05.268 Number of FDP configurations: 1 00:07:05.268 Version: 0 00:07:05.268 Size: 112 00:07:05.268 FDP Configuration Descriptor: 0 00:07:05.268 Descriptor Size: 96 00:07:05.268 Reclaim Group Identifier format: 2 00:07:05.268 FDP Volatile Write Cache: Not Present 00:07:05.268 FDP Configuration: Valid 00:07:05.268 Vendor Specific Size: 0 00:07:05.268 Number of Reclaim Groups: 2 00:07:05.268 Number of Reclaim Unit Handles: 8 00:07:05.268 Max Placement Identifiers: 128 00:07:05.268 Number of Namespaces Supported: 256 00:07:05.268 Reclaim Unit Nominal Size: 6000000 bytes 00:07:05.268 Estimated Reclaim Unit Time Limit: Not Reported 00:07:05.268 RUH Desc #000: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #001: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #002: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #003: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #004: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #005: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #006: RUH Type: Initially Isolated 00:07:05.268 RUH Desc #007: RUH Type: Initially Isolated 00:07:05.268 00:07:05.268 FDP reclaim unit handle usage log page 00:07:05.268 ====================================== 00:07:05.268 Number of Reclaim Unit Handles: 8 00:07:05.268 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:05.268 RUH Usage Desc #001: RUH Attributes: Unused 00:07:05.268 RUH Usage Desc #002: RUH Attributes: Unused 00:07:05.268 RUH Usage Desc #003: RUH Attributes: Unused 00:07:05.268 RUH Usage Desc #004: RUH Attributes: Unused 00:07:05.268 RUH Usage Desc #005: RUH Attributes: Unused 00:07:05.268 RUH Usage Desc #006: RUH Attributes: Unused 00:07:05.268 RUH Usage Desc #007: RUH Attributes: Unused 00:07:05.268 00:07:05.268 FDP statistics log page 00:07:05.268 ======================= 00:07:05.268 Host bytes with metadata written: 456368128 00:07:05.269 Media bytes with metadata written: 456433664 00:07:05.269 Media bytes erased: 0 00:07:05.269 00:07:05.269 FDP events log page 00:07:05.269 =================== 00:07:05.269 Number of FDP events: 0 00:07:05.269 00:07:05.269 NVM Specific Namespace Data 00:07:05.269 =========================== 00:07:05.269 Logical Block Storage Tag Mask: 0 00:07:05.269 Protection Information Capabilities: 00:07:05.269 16b Guard Protection Information Storage Tag Support: No 00:07:05.269 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:05.269 Storage Tag Check Read Support: No 00:07:05.269 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:05.269 ************************************ 00:07:05.269 END TEST nvme_identify 00:07:05.269 ************************************ 00:07:05.269 00:07:05.269 real 0m1.171s 00:07:05.269 user 0m0.427s 00:07:05.269 sys 0m0.513s 00:07:05.269 10:35:31 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.269 10:35:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:05.269 10:35:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:05.269 10:35:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.269 10:35:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.269 10:35:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.269 ************************************ 00:07:05.269 START TEST nvme_perf 00:07:05.269 ************************************ 00:07:05.269 10:35:31 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:05.269 10:35:31 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:06.647 Initializing NVMe Controllers 00:07:06.647 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:06.647 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:06.647 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:06.647 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:06.647 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:06.647 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:06.647 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:06.647 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:06.647 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:06.647 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:06.647 Initialization complete. Launching workers. 
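A quick sanity check on the figures that follow (a minimal editorial sketch, not part of the logged run): spdk_nvme_perf was invoked above with -o 12288, i.e. 12 KiB reads, so each row's MiB/s column should equal IOPS * 12288 / 2^20; likewise, the 4GiB namespace sizes printed by the identify test follow from 1048576 LBAs at the active 4096-byte LBA Format #04. Both checks in shell, with the numbers copied from this log:
# Throughput check, figures from the PCIE (0000:00:10.0) row of the table below
awk 'BEGIN { printf "%.2f MiB/s\n", 12928.00 * 12288 / (1024 * 1024) }'   # prints 151.50 MiB/s
# Namespace size check: LBA count * block size, figures from the identify output above
awk 'BEGIN { printf "%.0f GiB\n", 1048576 * 4096 / (1024 ^ 3) }'          # prints 4 GiB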
00:07:06.647 ======================================================== 00:07:06.647 Latency(us) 00:07:06.647 Device Information : IOPS MiB/s Average min max 00:07:06.647 PCIE (0000:00:10.0) NSID 1 from core 0: 12928.00 151.50 9913.71 5547.04 38492.44 00:07:06.647 PCIE (0000:00:11.0) NSID 1 from core 0: 12928.00 151.50 9899.68 5617.30 37033.31 00:07:06.647 PCIE (0000:00:13.0) NSID 1 from core 0: 12928.00 151.50 9883.86 5622.77 36182.64 00:07:06.647 PCIE (0000:00:12.0) NSID 1 from core 0: 12928.00 151.50 9867.45 5588.36 34675.61 00:07:06.647 PCIE (0000:00:12.0) NSID 2 from core 0: 12928.00 151.50 9851.26 5606.74 33480.44 00:07:06.647 PCIE (0000:00:12.0) NSID 3 from core 0: 12992.00 152.25 9787.11 5594.71 26335.64 00:07:06.647 ======================================================== 00:07:06.647 Total : 77632.00 909.75 9867.11 5547.04 38492.44 00:07:06.647 00:07:06.647 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:06.647 ================================================================================= 00:07:06.647 1.00000% : 5721.797us 00:07:06.647 10.00000% : 5999.065us 00:07:06.647 25.00000% : 6351.951us 00:07:06.647 50.00000% : 7763.495us 00:07:06.647 75.00000% : 13510.498us 00:07:06.647 90.00000% : 15426.166us 00:07:06.647 95.00000% : 16434.412us 00:07:06.647 98.00000% : 17845.957us 00:07:06.647 99.00000% : 18955.028us 00:07:06.647 99.50000% : 30650.683us 00:07:06.647 99.90000% : 38313.354us 00:07:06.647 99.99000% : 38515.003us 00:07:06.647 99.99900% : 38515.003us 00:07:06.647 99.99990% : 38515.003us 00:07:06.647 99.99999% : 38515.003us 00:07:06.647 00:07:06.647 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:06.647 ================================================================================= 00:07:06.647 1.00000% : 5772.209us 00:07:06.647 10.00000% : 6049.477us 00:07:06.647 25.00000% : 6326.745us 00:07:06.647 50.00000% : 7763.495us 00:07:06.647 75.00000% : 13409.674us 00:07:06.647 90.00000% : 15426.166us 00:07:06.647 95.00000% : 16535.237us 00:07:06.647 98.00000% : 17644.308us 00:07:06.647 99.00000% : 18652.554us 00:07:06.647 99.50000% : 29239.138us 00:07:06.647 99.90000% : 36901.809us 00:07:06.647 99.99000% : 37103.458us 00:07:06.647 99.99900% : 37103.458us 00:07:06.647 99.99990% : 37103.458us 00:07:06.647 99.99999% : 37103.458us 00:07:06.647 00:07:06.647 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:06.647 ================================================================================= 00:07:06.647 1.00000% : 5797.415us 00:07:06.647 10.00000% : 6049.477us 00:07:06.647 25.00000% : 6326.745us 00:07:06.647 50.00000% : 7763.495us 00:07:06.647 75.00000% : 13409.674us 00:07:06.647 90.00000% : 15426.166us 00:07:06.647 95.00000% : 16535.237us 00:07:06.647 98.00000% : 17644.308us 00:07:06.647 99.00000% : 18450.905us 00:07:06.647 99.50000% : 28835.840us 00:07:06.647 99.90000% : 36095.212us 00:07:06.647 99.99000% : 36296.862us 00:07:06.647 99.99900% : 36296.862us 00:07:06.647 99.99990% : 36296.862us 00:07:06.647 99.99999% : 36296.862us 00:07:06.647 00:07:06.647 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:06.647 ================================================================================= 00:07:06.647 1.00000% : 5772.209us 00:07:06.647 10.00000% : 6049.477us 00:07:06.647 25.00000% : 6351.951us 00:07:06.647 50.00000% : 7612.258us 00:07:06.647 75.00000% : 13409.674us 00:07:06.647 90.00000% : 15426.166us 00:07:06.647 95.00000% : 16535.237us 00:07:06.647 98.00000% : 17845.957us 00:07:06.647 
99.00000% : 18652.554us 00:07:06.647 99.50000% : 27827.594us 00:07:06.647 99.90000% : 34482.018us 00:07:06.647 99.99000% : 34683.668us 00:07:06.647 99.99900% : 34683.668us 00:07:06.647 99.99990% : 34683.668us 00:07:06.647 99.99999% : 34683.668us 00:07:06.647 00:07:06.647 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:06.647 ================================================================================= 00:07:06.647 1.00000% : 5797.415us 00:07:06.647 10.00000% : 6049.477us 00:07:06.647 25.00000% : 6326.745us 00:07:06.647 50.00000% : 7511.434us 00:07:06.647 75.00000% : 13510.498us 00:07:06.647 90.00000% : 15224.517us 00:07:06.647 95.00000% : 16333.588us 00:07:06.647 98.00000% : 17946.782us 00:07:06.647 99.00000% : 18753.378us 00:07:06.647 99.50000% : 26819.348us 00:07:06.647 99.90000% : 33272.123us 00:07:06.647 99.99000% : 33473.772us 00:07:06.647 99.99900% : 33675.422us 00:07:06.647 99.99990% : 33675.422us 00:07:06.647 99.99999% : 33675.422us 00:07:06.647 00:07:06.647 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:06.647 ================================================================================= 00:07:06.647 1.00000% : 5772.209us 00:07:06.647 10.00000% : 6049.477us 00:07:06.647 25.00000% : 6351.951us 00:07:06.647 50.00000% : 7662.671us 00:07:06.647 75.00000% : 13510.498us 00:07:06.647 90.00000% : 15224.517us 00:07:06.647 95.00000% : 16434.412us 00:07:06.647 98.00000% : 17745.132us 00:07:06.647 99.00000% : 18148.431us 00:07:06.647 99.50000% : 18955.028us 00:07:06.647 99.90000% : 26214.400us 00:07:06.647 99.99000% : 26416.049us 00:07:06.647 99.99900% : 26416.049us 00:07:06.647 99.99990% : 26416.049us 00:07:06.647 99.99999% : 26416.049us 00:07:06.647 00:07:06.647 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:06.647 ============================================================================== 00:07:06.647 Range in us Cumulative IO count 00:07:06.647 5545.354 - 5570.560: 0.0464% ( 6) 00:07:06.647 5570.560 - 5595.766: 0.1315% ( 11) 00:07:06.647 5595.766 - 5620.972: 0.2553% ( 16) 00:07:06.647 5620.972 - 5646.178: 0.3868% ( 17) 00:07:06.647 5646.178 - 5671.385: 0.5492% ( 21) 00:07:06.647 5671.385 - 5696.591: 0.7967% ( 32) 00:07:06.647 5696.591 - 5721.797: 1.1448% ( 45) 00:07:06.647 5721.797 - 5747.003: 1.6553% ( 66) 00:07:06.647 5747.003 - 5772.209: 2.2045% ( 71) 00:07:06.647 5772.209 - 5797.415: 2.8852% ( 88) 00:07:06.647 5797.415 - 5822.622: 3.6433% ( 98) 00:07:06.647 5822.622 - 5847.828: 4.6024% ( 124) 00:07:06.647 5847.828 - 5873.034: 5.4455% ( 109) 00:07:06.647 5873.034 - 5898.240: 6.3506% ( 117) 00:07:06.647 5898.240 - 5923.446: 7.2246% ( 113) 00:07:06.647 5923.446 - 5948.652: 8.1915% ( 125) 00:07:06.647 5948.652 - 5973.858: 9.1894% ( 129) 00:07:06.647 5973.858 - 5999.065: 10.2645% ( 139) 00:07:06.647 5999.065 - 6024.271: 11.2701% ( 130) 00:07:06.647 6024.271 - 6049.477: 12.2757% ( 130) 00:07:06.647 6049.477 - 6074.683: 13.3199% ( 135) 00:07:06.647 6074.683 - 6099.889: 14.3255% ( 130) 00:07:06.647 6099.889 - 6125.095: 15.6405% ( 170) 00:07:06.647 6125.095 - 6150.302: 16.7234% ( 140) 00:07:06.647 6150.302 - 6175.508: 17.8218% ( 142) 00:07:06.647 6175.508 - 6200.714: 18.9202% ( 142) 00:07:06.647 6200.714 - 6225.920: 19.9489% ( 133) 00:07:06.647 6225.920 - 6251.126: 21.1556% ( 156) 00:07:06.647 6251.126 - 6276.332: 22.1921% ( 134) 00:07:06.647 6276.332 - 6301.538: 23.4452% ( 162) 00:07:06.647 6301.538 - 6326.745: 24.6364% ( 154) 00:07:06.647 6326.745 - 6351.951: 25.7580% ( 145) 00:07:06.647 6351.951 - 6377.157: 
26.9415% ( 153) 00:07:06.647 6377.157 - 6402.363: 28.0786% ( 147) 00:07:06.647 6402.363 - 6427.569: 29.1151% ( 134) 00:07:06.647 6427.569 - 6452.775: 30.2676% ( 149) 00:07:06.647 6452.775 - 6503.188: 32.4025% ( 276) 00:07:06.647 6503.188 - 6553.600: 34.5452% ( 277) 00:07:06.647 6553.600 - 6604.012: 36.6569% ( 273) 00:07:06.647 6604.012 - 6654.425: 38.4669% ( 234) 00:07:06.647 6654.425 - 6704.837: 39.8360% ( 177) 00:07:06.647 6704.837 - 6755.249: 41.0195% ( 153) 00:07:06.647 6755.249 - 6805.662: 42.0019% ( 127) 00:07:06.648 6805.662 - 6856.074: 42.9146% ( 118) 00:07:06.648 6856.074 - 6906.486: 43.7113% ( 103) 00:07:06.648 6906.486 - 6956.898: 44.3843% ( 87) 00:07:06.648 6956.898 - 7007.311: 45.0031% ( 80) 00:07:06.648 7007.311 - 7057.723: 45.4904% ( 63) 00:07:06.648 7057.723 - 7108.135: 46.0241% ( 69) 00:07:06.648 7108.135 - 7158.548: 46.4728% ( 58) 00:07:06.648 7158.548 - 7208.960: 46.9214% ( 58) 00:07:06.648 7208.960 - 7259.372: 47.3082% ( 50) 00:07:06.648 7259.372 - 7309.785: 47.6408% ( 43) 00:07:06.648 7309.785 - 7360.197: 47.9657% ( 42) 00:07:06.648 7360.197 - 7410.609: 48.2905% ( 42) 00:07:06.648 7410.609 - 7461.022: 48.6077% ( 41) 00:07:06.648 7461.022 - 7511.434: 48.9325% ( 42) 00:07:06.648 7511.434 - 7561.846: 49.2110% ( 36) 00:07:06.648 7561.846 - 7612.258: 49.4972% ( 37) 00:07:06.648 7612.258 - 7662.671: 49.7757% ( 36) 00:07:06.648 7662.671 - 7713.083: 49.9691% ( 25) 00:07:06.648 7713.083 - 7763.495: 50.1779% ( 27) 00:07:06.648 7763.495 - 7813.908: 50.3790% ( 26) 00:07:06.648 7813.908 - 7864.320: 50.5492% ( 22) 00:07:06.648 7864.320 - 7914.732: 50.7348% ( 24) 00:07:06.648 7914.732 - 7965.145: 50.8818% ( 19) 00:07:06.648 7965.145 - 8015.557: 51.0210% ( 18) 00:07:06.648 8015.557 - 8065.969: 51.1757% ( 20) 00:07:06.648 8065.969 - 8116.382: 51.3382% ( 21) 00:07:06.648 8116.382 - 8166.794: 51.4078% ( 9) 00:07:06.648 8166.794 - 8217.206: 51.5780% ( 22) 00:07:06.648 8217.206 - 8267.618: 51.7017% ( 16) 00:07:06.648 8267.618 - 8318.031: 51.8332% ( 17) 00:07:06.648 8318.031 - 8368.443: 51.9338% ( 13) 00:07:06.648 8368.443 - 8418.855: 52.0266% ( 12) 00:07:06.648 8418.855 - 8469.268: 52.1504% ( 16) 00:07:06.648 8469.268 - 8519.680: 52.2432% ( 12) 00:07:06.648 8519.680 - 8570.092: 52.3592% ( 15) 00:07:06.648 8570.092 - 8620.505: 52.4675% ( 14) 00:07:06.648 8620.505 - 8670.917: 52.6067% ( 18) 00:07:06.648 8670.917 - 8721.329: 52.7460% ( 18) 00:07:06.648 8721.329 - 8771.742: 52.8929% ( 19) 00:07:06.648 8771.742 - 8822.154: 53.0941% ( 26) 00:07:06.648 8822.154 - 8872.566: 53.2488% ( 20) 00:07:06.648 8872.566 - 8922.978: 53.3957% ( 19) 00:07:06.648 8922.978 - 8973.391: 53.5659% ( 22) 00:07:06.648 8973.391 - 9023.803: 53.7283% ( 21) 00:07:06.648 9023.803 - 9074.215: 53.8753% ( 19) 00:07:06.648 9074.215 - 9124.628: 54.0764% ( 26) 00:07:06.648 9124.628 - 9175.040: 54.2930% ( 28) 00:07:06.648 9175.040 - 9225.452: 54.4864% ( 25) 00:07:06.648 9225.452 - 9275.865: 54.7416% ( 33) 00:07:06.648 9275.865 - 9326.277: 54.9737% ( 30) 00:07:06.648 9326.277 - 9376.689: 55.1980% ( 29) 00:07:06.648 9376.689 - 9427.102: 55.4688% ( 35) 00:07:06.648 9427.102 - 9477.514: 55.7008% ( 30) 00:07:06.648 9477.514 - 9527.926: 55.9715% ( 35) 00:07:06.648 9527.926 - 9578.338: 56.1804% ( 27) 00:07:06.648 9578.338 - 9628.751: 56.4511% ( 35) 00:07:06.648 9628.751 - 9679.163: 56.6677% ( 28) 00:07:06.648 9679.163 - 9729.575: 56.8611% ( 25) 00:07:06.648 9729.575 - 9779.988: 57.0390% ( 23) 00:07:06.648 9779.988 - 9830.400: 57.2710% ( 30) 00:07:06.648 9830.400 - 9880.812: 57.4644% ( 25) 00:07:06.648 9880.812 - 9931.225: 57.7506% ( 37) 
00:07:06.648 9931.225 - 9981.637: 58.0059% ( 33) 00:07:06.648 9981.637 - 10032.049: 58.2611% ( 33) 00:07:06.648 10032.049 - 10082.462: 58.4700% ( 27) 00:07:06.648 10082.462 - 10132.874: 58.8800% ( 53) 00:07:06.648 10132.874 - 10183.286: 59.0656% ( 24) 00:07:06.648 10183.286 - 10233.698: 59.2822% ( 28) 00:07:06.648 10233.698 - 10284.111: 59.4291% ( 19) 00:07:06.648 10284.111 - 10334.523: 59.6535% ( 29) 00:07:06.648 10334.523 - 10384.935: 59.8159% ( 21) 00:07:06.648 10384.935 - 10435.348: 59.9706% ( 20) 00:07:06.648 10435.348 - 10485.760: 60.1098% ( 18) 00:07:06.648 10485.760 - 10536.172: 60.2336% ( 16) 00:07:06.648 10536.172 - 10586.585: 60.3883% ( 20) 00:07:06.648 10586.585 - 10636.997: 60.5353% ( 19) 00:07:06.648 10636.997 - 10687.409: 60.7364% ( 26) 00:07:06.648 10687.409 - 10737.822: 60.9762% ( 31) 00:07:06.648 10737.822 - 10788.234: 61.1618% ( 24) 00:07:06.648 10788.234 - 10838.646: 61.3475% ( 24) 00:07:06.648 10838.646 - 10889.058: 61.5563% ( 27) 00:07:06.648 10889.058 - 10939.471: 61.7033% ( 19) 00:07:06.648 10939.471 - 10989.883: 61.8580% ( 20) 00:07:06.648 10989.883 - 11040.295: 61.9972% ( 18) 00:07:06.648 11040.295 - 11090.708: 62.1132% ( 15) 00:07:06.648 11090.708 - 11141.120: 62.2370% ( 16) 00:07:06.648 11141.120 - 11191.532: 62.3608% ( 16) 00:07:06.648 11191.532 - 11241.945: 62.4691% ( 14) 00:07:06.648 11241.945 - 11292.357: 62.5696% ( 13) 00:07:06.648 11292.357 - 11342.769: 62.6392% ( 9) 00:07:06.648 11342.769 - 11393.182: 62.7707% ( 17) 00:07:06.648 11393.182 - 11443.594: 62.8790% ( 14) 00:07:06.648 11443.594 - 11494.006: 63.0492% ( 22) 00:07:06.648 11494.006 - 11544.418: 63.2116% ( 21) 00:07:06.648 11544.418 - 11594.831: 63.3663% ( 20) 00:07:06.648 11594.831 - 11645.243: 63.4669% ( 13) 00:07:06.648 11645.243 - 11695.655: 63.7222% ( 33) 00:07:06.648 11695.655 - 11746.068: 63.9001% ( 23) 00:07:06.648 11746.068 - 11796.480: 64.0857% ( 24) 00:07:06.648 11796.480 - 11846.892: 64.3332% ( 32) 00:07:06.648 11846.892 - 11897.305: 64.6272% ( 38) 00:07:06.648 11897.305 - 11947.717: 64.8205% ( 25) 00:07:06.648 11947.717 - 11998.129: 65.0681% ( 32) 00:07:06.648 11998.129 - 12048.542: 65.3311% ( 34) 00:07:06.648 12048.542 - 12098.954: 65.5863% ( 33) 00:07:06.648 12098.954 - 12149.366: 65.9189% ( 43) 00:07:06.648 12149.366 - 12199.778: 66.1819% ( 34) 00:07:06.648 12199.778 - 12250.191: 66.4836% ( 39) 00:07:06.648 12250.191 - 12300.603: 66.7775% ( 38) 00:07:06.648 12300.603 - 12351.015: 67.1101% ( 43) 00:07:06.648 12351.015 - 12401.428: 67.4582% ( 45) 00:07:06.648 12401.428 - 12451.840: 67.7522% ( 38) 00:07:06.648 12451.840 - 12502.252: 68.0229% ( 35) 00:07:06.648 12502.252 - 12552.665: 68.4329% ( 53) 00:07:06.648 12552.665 - 12603.077: 68.8428% ( 53) 00:07:06.648 12603.077 - 12653.489: 69.1754% ( 43) 00:07:06.648 12653.489 - 12703.902: 69.4771% ( 39) 00:07:06.648 12703.902 - 12754.314: 69.8097% ( 43) 00:07:06.648 12754.314 - 12804.726: 70.0650% ( 33) 00:07:06.648 12804.726 - 12855.138: 70.4440% ( 49) 00:07:06.648 12855.138 - 12905.551: 70.7921% ( 45) 00:07:06.648 12905.551 - 13006.375: 71.5269% ( 95) 00:07:06.648 13006.375 - 13107.200: 72.3159% ( 102) 00:07:06.648 13107.200 - 13208.025: 73.1745% ( 111) 00:07:06.648 13208.025 - 13308.849: 73.9016% ( 94) 00:07:06.648 13308.849 - 13409.674: 74.6906% ( 102) 00:07:06.648 13409.674 - 13510.498: 75.6498% ( 124) 00:07:06.648 13510.498 - 13611.323: 76.5161% ( 112) 00:07:06.648 13611.323 - 13712.148: 77.6532% ( 147) 00:07:06.648 13712.148 - 13812.972: 78.5736% ( 119) 00:07:06.648 13812.972 - 13913.797: 79.5715% ( 129) 00:07:06.648 13913.797 - 
14014.622: 80.7085% ( 147) 00:07:06.648 14014.622 - 14115.446: 81.7837% ( 139) 00:07:06.648 14115.446 - 14216.271: 82.8512% ( 138) 00:07:06.648 14216.271 - 14317.095: 83.6788% ( 107) 00:07:06.648 14317.095 - 14417.920: 84.6612% ( 127) 00:07:06.648 14417.920 - 14518.745: 85.5353% ( 113) 00:07:06.648 14518.745 - 14619.569: 86.2392% ( 91) 00:07:06.648 14619.569 - 14720.394: 86.8735% ( 82) 00:07:06.648 14720.394 - 14821.218: 87.5309% ( 85) 00:07:06.648 14821.218 - 14922.043: 88.2735% ( 96) 00:07:06.648 14922.043 - 15022.868: 88.7222% ( 58) 00:07:06.648 15022.868 - 15123.692: 89.1940% ( 61) 00:07:06.648 15123.692 - 15224.517: 89.6504% ( 59) 00:07:06.648 15224.517 - 15325.342: 89.9830% ( 43) 00:07:06.648 15325.342 - 15426.166: 90.4084% ( 55) 00:07:06.648 15426.166 - 15526.991: 90.8029% ( 51) 00:07:06.648 15526.991 - 15627.815: 91.3289% ( 68) 00:07:06.648 15627.815 - 15728.640: 91.7543% ( 55) 00:07:06.648 15728.640 - 15829.465: 92.2030% ( 58) 00:07:06.648 15829.465 - 15930.289: 92.7599% ( 72) 00:07:06.648 15930.289 - 16031.114: 93.2704% ( 66) 00:07:06.648 16031.114 - 16131.938: 93.6881% ( 54) 00:07:06.648 16131.938 - 16232.763: 94.1136% ( 55) 00:07:06.648 16232.763 - 16333.588: 94.6318% ( 67) 00:07:06.648 16333.588 - 16434.412: 95.0340% ( 52) 00:07:06.648 16434.412 - 16535.237: 95.4517% ( 54) 00:07:06.648 16535.237 - 16636.062: 95.7998% ( 45) 00:07:06.648 16636.062 - 16736.886: 96.0009% ( 26) 00:07:06.648 16736.886 - 16837.711: 96.2794% ( 36) 00:07:06.648 16837.711 - 16938.535: 96.4728% ( 25) 00:07:06.648 16938.535 - 17039.360: 96.5965% ( 16) 00:07:06.648 17039.360 - 17140.185: 96.7822% ( 24) 00:07:06.648 17140.185 - 17241.009: 96.9291% ( 19) 00:07:06.648 17241.009 - 17341.834: 97.1999% ( 35) 00:07:06.648 17341.834 - 17442.658: 97.3236% ( 16) 00:07:06.648 17442.658 - 17543.483: 97.5402% ( 28) 00:07:06.648 17543.483 - 17644.308: 97.7336% ( 25) 00:07:06.648 17644.308 - 17745.132: 97.9038% ( 22) 00:07:06.648 17745.132 - 17845.957: 98.0894% ( 24) 00:07:06.648 17845.957 - 17946.782: 98.2519% ( 21) 00:07:06.648 17946.782 - 18047.606: 98.4839% ( 30) 00:07:06.648 18047.606 - 18148.431: 98.5922% ( 14) 00:07:06.648 18148.431 - 18249.255: 98.7005% ( 14) 00:07:06.648 18249.255 - 18350.080: 98.7778% ( 10) 00:07:06.648 18350.080 - 18450.905: 98.8475% ( 9) 00:07:06.648 18450.905 - 18551.729: 98.9016% ( 7) 00:07:06.648 18551.729 - 18652.554: 98.9325% ( 4) 00:07:06.648 18652.554 - 18753.378: 98.9480% ( 2) 00:07:06.648 18753.378 - 18854.203: 98.9558% ( 1) 00:07:06.648 18854.203 - 18955.028: 99.0099% ( 7) 00:07:06.648 28835.840 - 29037.489: 99.0176% ( 1) 00:07:06.648 29037.489 - 29239.138: 99.0795% ( 8) 00:07:06.648 29239.138 - 29440.788: 99.1414% ( 8) 00:07:06.648 29440.788 - 29642.437: 99.2110% ( 9) 00:07:06.649 29642.437 - 29844.086: 99.2652% ( 7) 00:07:06.649 29844.086 - 30045.735: 99.3270% ( 8) 00:07:06.649 30045.735 - 30247.385: 99.3967% ( 9) 00:07:06.649 30247.385 - 30449.034: 99.4585% ( 8) 00:07:06.649 30449.034 - 30650.683: 99.5050% ( 6) 00:07:06.649 36700.160 - 36901.809: 99.5204% ( 2) 00:07:06.649 36901.809 - 37103.458: 99.5746% ( 7) 00:07:06.649 37103.458 - 37305.108: 99.6364% ( 8) 00:07:06.649 37305.108 - 37506.757: 99.6906% ( 7) 00:07:06.649 37506.757 - 37708.406: 99.7602% ( 9) 00:07:06.649 37708.406 - 37910.055: 99.8221% ( 8) 00:07:06.649 37910.055 - 38111.705: 99.8840% ( 8) 00:07:06.649 38111.705 - 38313.354: 99.9459% ( 8) 00:07:06.649 38313.354 - 38515.003: 100.0000% ( 7) 00:07:06.649 00:07:06.649 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:06.649 
============================================================================== 00:07:06.649 Range in us Cumulative IO count 00:07:06.649 5595.766 - 5620.972: 0.0077% ( 1) 00:07:06.649 5620.972 - 5646.178: 0.0541% ( 6) 00:07:06.649 5646.178 - 5671.385: 0.1238% ( 9) 00:07:06.649 5671.385 - 5696.591: 0.2166% ( 12) 00:07:06.649 5696.591 - 5721.797: 0.4254% ( 27) 00:07:06.649 5721.797 - 5747.003: 0.7426% ( 41) 00:07:06.649 5747.003 - 5772.209: 1.1216% ( 49) 00:07:06.649 5772.209 - 5797.415: 1.5316% ( 53) 00:07:06.649 5797.415 - 5822.622: 2.1349% ( 78) 00:07:06.649 5822.622 - 5847.828: 2.6532% ( 67) 00:07:06.649 5847.828 - 5873.034: 3.3725% ( 93) 00:07:06.649 5873.034 - 5898.240: 4.2543% ( 114) 00:07:06.649 5898.240 - 5923.446: 5.3063% ( 136) 00:07:06.649 5923.446 - 5948.652: 6.2964% ( 128) 00:07:06.649 5948.652 - 5973.858: 7.2942% ( 129) 00:07:06.649 5973.858 - 5999.065: 8.4313% ( 147) 00:07:06.649 5999.065 - 6024.271: 9.5993% ( 151) 00:07:06.649 6024.271 - 6049.477: 10.8601% ( 163) 00:07:06.649 6049.477 - 6074.683: 12.0436% ( 153) 00:07:06.649 6074.683 - 6099.889: 13.3586% ( 170) 00:07:06.649 6099.889 - 6125.095: 14.6504% ( 167) 00:07:06.649 6125.095 - 6150.302: 15.9653% ( 170) 00:07:06.649 6150.302 - 6175.508: 17.2107% ( 161) 00:07:06.649 6175.508 - 6200.714: 18.5257% ( 170) 00:07:06.649 6200.714 - 6225.920: 19.8329% ( 169) 00:07:06.649 6225.920 - 6251.126: 21.1170% ( 166) 00:07:06.649 6251.126 - 6276.332: 22.4319% ( 170) 00:07:06.649 6276.332 - 6301.538: 23.8243% ( 180) 00:07:06.649 6301.538 - 6326.745: 25.2321% ( 182) 00:07:06.649 6326.745 - 6351.951: 26.5393% ( 169) 00:07:06.649 6351.951 - 6377.157: 27.8233% ( 166) 00:07:06.649 6377.157 - 6402.363: 29.1383% ( 170) 00:07:06.649 6402.363 - 6427.569: 30.4455% ( 169) 00:07:06.649 6427.569 - 6452.775: 31.6213% ( 152) 00:07:06.649 6452.775 - 6503.188: 33.8490% ( 288) 00:07:06.649 6503.188 - 6553.600: 35.8756% ( 262) 00:07:06.649 6553.600 - 6604.012: 37.7475% ( 242) 00:07:06.649 6604.012 - 6654.425: 39.4570% ( 221) 00:07:06.649 6654.425 - 6704.837: 40.7488% ( 167) 00:07:06.649 6704.837 - 6755.249: 41.7621% ( 131) 00:07:06.649 6755.249 - 6805.662: 42.7058% ( 122) 00:07:06.649 6805.662 - 6856.074: 43.4715% ( 99) 00:07:06.649 6856.074 - 6906.486: 44.0981% ( 81) 00:07:06.649 6906.486 - 6956.898: 44.6086% ( 66) 00:07:06.649 6956.898 - 7007.311: 45.0882% ( 62) 00:07:06.649 7007.311 - 7057.723: 45.5678% ( 62) 00:07:06.649 7057.723 - 7108.135: 45.9700% ( 52) 00:07:06.649 7108.135 - 7158.548: 46.3490% ( 49) 00:07:06.649 7158.548 - 7208.960: 46.6739% ( 42) 00:07:06.649 7208.960 - 7259.372: 47.0142% ( 44) 00:07:06.649 7259.372 - 7309.785: 47.3623% ( 45) 00:07:06.649 7309.785 - 7360.197: 47.7723% ( 53) 00:07:06.649 7360.197 - 7410.609: 48.1358% ( 47) 00:07:06.649 7410.609 - 7461.022: 48.4607% ( 42) 00:07:06.649 7461.022 - 7511.434: 48.7856% ( 42) 00:07:06.649 7511.434 - 7561.846: 49.0408% ( 33) 00:07:06.649 7561.846 - 7612.258: 49.2961% ( 33) 00:07:06.649 7612.258 - 7662.671: 49.5359% ( 31) 00:07:06.649 7662.671 - 7713.083: 49.8066% ( 35) 00:07:06.649 7713.083 - 7763.495: 50.0309% ( 29) 00:07:06.649 7763.495 - 7813.908: 50.2785% ( 32) 00:07:06.649 7813.908 - 7864.320: 50.5183% ( 31) 00:07:06.649 7864.320 - 7914.732: 50.7503% ( 30) 00:07:06.649 7914.732 - 7965.145: 50.9514% ( 26) 00:07:06.649 7965.145 - 8015.557: 51.1293% ( 23) 00:07:06.649 8015.557 - 8065.969: 51.3072% ( 23) 00:07:06.649 8065.969 - 8116.382: 51.4774% ( 22) 00:07:06.649 8116.382 - 8166.794: 51.6476% ( 22) 00:07:06.649 8166.794 - 8217.206: 51.7791% ( 17) 00:07:06.649 8217.206 - 8267.618: 
51.8796% ( 13) 00:07:06.649 8267.618 - 8318.031: 51.9725% ( 12) 00:07:06.649 8318.031 - 8368.443: 52.0653% ( 12) 00:07:06.649 8368.443 - 8418.855: 52.1581% ( 12) 00:07:06.649 8418.855 - 8469.268: 52.2432% ( 11) 00:07:06.649 8469.268 - 8519.680: 52.3360% ( 12) 00:07:06.649 8519.680 - 8570.092: 52.4134% ( 10) 00:07:06.649 8570.092 - 8620.505: 52.5294% ( 15) 00:07:06.649 8620.505 - 8670.917: 52.6454% ( 15) 00:07:06.649 8670.917 - 8721.329: 52.7847% ( 18) 00:07:06.649 8721.329 - 8771.742: 52.9780% ( 25) 00:07:06.649 8771.742 - 8822.154: 53.1250% ( 19) 00:07:06.649 8822.154 - 8872.566: 53.3029% ( 23) 00:07:06.649 8872.566 - 8922.978: 53.4499% ( 19) 00:07:06.649 8922.978 - 8973.391: 53.6200% ( 22) 00:07:06.649 8973.391 - 9023.803: 53.8057% ( 24) 00:07:06.649 9023.803 - 9074.215: 53.9836% ( 23) 00:07:06.649 9074.215 - 9124.628: 54.1692% ( 24) 00:07:06.649 9124.628 - 9175.040: 54.4013% ( 30) 00:07:06.649 9175.040 - 9225.452: 54.6101% ( 27) 00:07:06.649 9225.452 - 9275.865: 54.8499% ( 31) 00:07:06.649 9275.865 - 9326.277: 55.0975% ( 32) 00:07:06.649 9326.277 - 9376.689: 55.3218% ( 29) 00:07:06.649 9376.689 - 9427.102: 55.5848% ( 34) 00:07:06.649 9427.102 - 9477.514: 55.8787% ( 38) 00:07:06.649 9477.514 - 9527.926: 56.2268% ( 45) 00:07:06.649 9527.926 - 9578.338: 56.5362% ( 40) 00:07:06.649 9578.338 - 9628.751: 56.8069% ( 35) 00:07:06.649 9628.751 - 9679.163: 57.0777% ( 35) 00:07:06.649 9679.163 - 9729.575: 57.3639% ( 37) 00:07:06.649 9729.575 - 9779.988: 57.6269% ( 34) 00:07:06.649 9779.988 - 9830.400: 57.9285% ( 39) 00:07:06.649 9830.400 - 9880.812: 58.2379% ( 40) 00:07:06.649 9880.812 - 9931.225: 58.5164% ( 36) 00:07:06.649 9931.225 - 9981.637: 58.7949% ( 36) 00:07:06.649 9981.637 - 10032.049: 59.0347% ( 31) 00:07:06.649 10032.049 - 10082.462: 59.2435% ( 27) 00:07:06.649 10082.462 - 10132.874: 59.4601% ( 28) 00:07:06.649 10132.874 - 10183.286: 59.6612% ( 26) 00:07:06.649 10183.286 - 10233.698: 59.8778% ( 28) 00:07:06.649 10233.698 - 10284.111: 60.0557% ( 23) 00:07:06.649 10284.111 - 10334.523: 60.1717% ( 15) 00:07:06.649 10334.523 - 10384.935: 60.2645% ( 12) 00:07:06.649 10384.935 - 10435.348: 60.3574% ( 12) 00:07:06.649 10435.348 - 10485.760: 60.4347% ( 10) 00:07:06.649 10485.760 - 10536.172: 60.4966% ( 8) 00:07:06.649 10536.172 - 10586.585: 60.5739% ( 10) 00:07:06.649 10586.585 - 10636.997: 60.6436% ( 9) 00:07:06.649 10636.997 - 10687.409: 60.6900% ( 6) 00:07:06.649 10687.409 - 10737.822: 60.7441% ( 7) 00:07:06.649 10737.822 - 10788.234: 60.7983% ( 7) 00:07:06.649 10788.234 - 10838.646: 60.8601% ( 8) 00:07:06.649 10838.646 - 10889.058: 60.9220% ( 8) 00:07:06.649 10889.058 - 10939.471: 60.9839% ( 8) 00:07:06.649 10939.471 - 10989.883: 61.0381% ( 7) 00:07:06.649 10989.883 - 11040.295: 61.0845% ( 6) 00:07:06.649 11040.295 - 11090.708: 61.1618% ( 10) 00:07:06.649 11090.708 - 11141.120: 61.3397% ( 23) 00:07:06.649 11141.120 - 11191.532: 61.4558% ( 15) 00:07:06.649 11191.532 - 11241.945: 61.5950% ( 18) 00:07:06.649 11241.945 - 11292.357: 61.8038% ( 27) 00:07:06.649 11292.357 - 11342.769: 61.9585% ( 20) 00:07:06.649 11342.769 - 11393.182: 62.1210% ( 21) 00:07:06.649 11393.182 - 11443.594: 62.2602% ( 18) 00:07:06.649 11443.594 - 11494.006: 62.3994% ( 18) 00:07:06.649 11494.006 - 11544.418: 62.5309% ( 17) 00:07:06.649 11544.418 - 11594.831: 62.6624% ( 17) 00:07:06.649 11594.831 - 11645.243: 62.7862% ( 16) 00:07:06.649 11645.243 - 11695.655: 62.9254% ( 18) 00:07:06.649 11695.655 - 11746.068: 63.0879% ( 21) 00:07:06.649 11746.068 - 11796.480: 63.3277% ( 31) 00:07:06.649 11796.480 - 11846.892: 63.5675% ( 
31) 00:07:06.649 11846.892 - 11897.305: 63.8459% ( 36) 00:07:06.649 11897.305 - 11947.717: 64.1708% ( 42) 00:07:06.649 11947.717 - 11998.129: 64.4570% ( 37) 00:07:06.649 11998.129 - 12048.542: 64.7587% ( 39) 00:07:06.649 12048.542 - 12098.954: 65.0449% ( 37) 00:07:06.649 12098.954 - 12149.366: 65.3620% ( 41) 00:07:06.649 12149.366 - 12199.778: 65.7488% ( 50) 00:07:06.649 12199.778 - 12250.191: 66.2206% ( 61) 00:07:06.649 12250.191 - 12300.603: 66.6306% ( 53) 00:07:06.649 12300.603 - 12351.015: 67.0173% ( 50) 00:07:06.649 12351.015 - 12401.428: 67.4118% ( 51) 00:07:06.649 12401.428 - 12451.840: 67.8605% ( 58) 00:07:06.649 12451.840 - 12502.252: 68.3632% ( 65) 00:07:06.649 12502.252 - 12552.665: 68.8196% ( 59) 00:07:06.649 12552.665 - 12603.077: 69.2141% ( 51) 00:07:06.649 12603.077 - 12653.489: 69.6473% ( 56) 00:07:06.649 12653.489 - 12703.902: 70.0495% ( 52) 00:07:06.649 12703.902 - 12754.314: 70.4053% ( 46) 00:07:06.649 12754.314 - 12804.726: 70.7534% ( 45) 00:07:06.649 12804.726 - 12855.138: 71.0938% ( 44) 00:07:06.649 12855.138 - 12905.551: 71.4418% ( 45) 00:07:06.649 12905.551 - 13006.375: 72.2463% ( 104) 00:07:06.649 13006.375 - 13107.200: 72.9734% ( 94) 00:07:06.649 13107.200 - 13208.025: 73.7237% ( 97) 00:07:06.649 13208.025 - 13308.849: 74.5746% ( 110) 00:07:06.649 13308.849 - 13409.674: 75.3713% ( 103) 00:07:06.649 13409.674 - 13510.498: 76.1448% ( 100) 00:07:06.650 13510.498 - 13611.323: 76.9183% ( 100) 00:07:06.650 13611.323 - 13712.148: 77.6300% ( 92) 00:07:06.650 13712.148 - 13812.972: 78.4189% ( 102) 00:07:06.650 13812.972 - 13913.797: 79.4632% ( 135) 00:07:06.650 13913.797 - 14014.622: 80.5384% ( 139) 00:07:06.650 14014.622 - 14115.446: 81.5439% ( 130) 00:07:06.650 14115.446 - 14216.271: 82.4489% ( 117) 00:07:06.650 14216.271 - 14317.095: 83.4158% ( 125) 00:07:06.650 14317.095 - 14417.920: 84.2512% ( 108) 00:07:06.650 14417.920 - 14518.745: 85.1949% ( 122) 00:07:06.650 14518.745 - 14619.569: 86.0922% ( 116) 00:07:06.650 14619.569 - 14720.394: 86.8425% ( 97) 00:07:06.650 14720.394 - 14821.218: 87.4536% ( 79) 00:07:06.650 14821.218 - 14922.043: 88.0337% ( 75) 00:07:06.650 14922.043 - 15022.868: 88.5133% ( 62) 00:07:06.650 15022.868 - 15123.692: 89.0161% ( 65) 00:07:06.650 15123.692 - 15224.517: 89.4647% ( 58) 00:07:06.650 15224.517 - 15325.342: 89.8902% ( 55) 00:07:06.650 15325.342 - 15426.166: 90.3620% ( 61) 00:07:06.650 15426.166 - 15526.991: 90.7256% ( 47) 00:07:06.650 15526.991 - 15627.815: 91.0814% ( 46) 00:07:06.650 15627.815 - 15728.640: 91.4913% ( 53) 00:07:06.650 15728.640 - 15829.465: 91.9477% ( 59) 00:07:06.650 15829.465 - 15930.289: 92.5278% ( 75) 00:07:06.650 15930.289 - 16031.114: 93.0693% ( 70) 00:07:06.650 16031.114 - 16131.938: 93.5179% ( 58) 00:07:06.650 16131.938 - 16232.763: 93.9511% ( 56) 00:07:06.650 16232.763 - 16333.588: 94.3920% ( 57) 00:07:06.650 16333.588 - 16434.412: 94.8871% ( 64) 00:07:06.650 16434.412 - 16535.237: 95.3280% ( 57) 00:07:06.650 16535.237 - 16636.062: 95.7921% ( 60) 00:07:06.650 16636.062 - 16736.886: 96.1711% ( 49) 00:07:06.650 16736.886 - 16837.711: 96.4032% ( 30) 00:07:06.650 16837.711 - 16938.535: 96.5656% ( 21) 00:07:06.650 16938.535 - 17039.360: 96.7899% ( 29) 00:07:06.650 17039.360 - 17140.185: 97.0374% ( 32) 00:07:06.650 17140.185 - 17241.009: 97.3004% ( 34) 00:07:06.650 17241.009 - 17341.834: 97.5093% ( 27) 00:07:06.650 17341.834 - 17442.658: 97.7491% ( 31) 00:07:06.650 17442.658 - 17543.483: 97.9270% ( 23) 00:07:06.650 17543.483 - 17644.308: 98.0972% ( 22) 00:07:06.650 17644.308 - 17745.132: 98.2519% ( 20) 00:07:06.650 
17745.132 - 17845.957: 98.3988% ( 19) 00:07:06.650 17845.957 - 17946.782: 98.5458% ( 19) 00:07:06.650 17946.782 - 18047.606: 98.6773% ( 17) 00:07:06.650 18047.606 - 18148.431: 98.7469% ( 9) 00:07:06.650 18148.431 - 18249.255: 98.7933% ( 6) 00:07:06.650 18249.255 - 18350.080: 98.8552% ( 8) 00:07:06.650 18350.080 - 18450.905: 98.9093% ( 7) 00:07:06.650 18450.905 - 18551.729: 98.9712% ( 8) 00:07:06.650 18551.729 - 18652.554: 99.0022% ( 4) 00:07:06.650 18652.554 - 18753.378: 99.0099% ( 1) 00:07:06.650 27625.945 - 27827.594: 99.0563% ( 6) 00:07:06.650 27827.594 - 28029.243: 99.1105% ( 7) 00:07:06.650 28029.243 - 28230.892: 99.1801% ( 9) 00:07:06.650 28230.892 - 28432.542: 99.2420% ( 8) 00:07:06.650 28432.542 - 28634.191: 99.3116% ( 9) 00:07:06.650 28634.191 - 28835.840: 99.3735% ( 8) 00:07:06.650 28835.840 - 29037.489: 99.4431% ( 9) 00:07:06.650 29037.489 - 29239.138: 99.5050% ( 8) 00:07:06.650 35288.615 - 35490.265: 99.5127% ( 1) 00:07:06.650 35490.265 - 35691.914: 99.5668% ( 7) 00:07:06.650 35691.914 - 35893.563: 99.6287% ( 8) 00:07:06.650 35893.563 - 36095.212: 99.6983% ( 9) 00:07:06.650 36095.212 - 36296.862: 99.7602% ( 8) 00:07:06.650 36296.862 - 36498.511: 99.8298% ( 9) 00:07:06.650 36498.511 - 36700.160: 99.8917% ( 8) 00:07:06.650 36700.160 - 36901.809: 99.9536% ( 8) 00:07:06.650 36901.809 - 37103.458: 100.0000% ( 6) 00:07:06.650 00:07:06.650 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:06.650 ============================================================================== 00:07:06.650 Range in us Cumulative IO count 00:07:06.650 5620.972 - 5646.178: 0.0541% ( 7) 00:07:06.650 5646.178 - 5671.385: 0.0774% ( 3) 00:07:06.650 5696.591 - 5721.797: 0.3094% ( 30) 00:07:06.650 5721.797 - 5747.003: 0.6265% ( 41) 00:07:06.650 5747.003 - 5772.209: 0.9978% ( 48) 00:07:06.650 5772.209 - 5797.415: 1.3304% ( 43) 00:07:06.650 5797.415 - 5822.622: 1.8100% ( 62) 00:07:06.650 5822.622 - 5847.828: 2.5913% ( 101) 00:07:06.650 5847.828 - 5873.034: 3.4035% ( 105) 00:07:06.650 5873.034 - 5898.240: 4.3007% ( 116) 00:07:06.650 5898.240 - 5923.446: 5.2599% ( 124) 00:07:06.650 5923.446 - 5948.652: 6.1340% ( 113) 00:07:06.650 5948.652 - 5973.858: 7.2633% ( 146) 00:07:06.650 5973.858 - 5999.065: 8.6788% ( 183) 00:07:06.650 5999.065 - 6024.271: 9.8778% ( 155) 00:07:06.650 6024.271 - 6049.477: 11.0535% ( 152) 00:07:06.650 6049.477 - 6074.683: 12.1442% ( 141) 00:07:06.650 6074.683 - 6099.889: 13.3509% ( 156) 00:07:06.650 6099.889 - 6125.095: 14.6504% ( 168) 00:07:06.650 6125.095 - 6150.302: 15.9808% ( 172) 00:07:06.650 6150.302 - 6175.508: 17.3345% ( 175) 00:07:06.650 6175.508 - 6200.714: 18.7732% ( 186) 00:07:06.650 6200.714 - 6225.920: 20.0727% ( 168) 00:07:06.650 6225.920 - 6251.126: 21.3335% ( 163) 00:07:06.650 6251.126 - 6276.332: 22.6949% ( 176) 00:07:06.650 6276.332 - 6301.538: 24.0176% ( 171) 00:07:06.650 6301.538 - 6326.745: 25.3945% ( 178) 00:07:06.650 6326.745 - 6351.951: 26.8023% ( 182) 00:07:06.650 6351.951 - 6377.157: 28.1095% ( 169) 00:07:06.650 6377.157 - 6402.363: 29.4709% ( 176) 00:07:06.650 6402.363 - 6427.569: 30.8014% ( 172) 00:07:06.650 6427.569 - 6452.775: 32.0390% ( 160) 00:07:06.650 6452.775 - 6503.188: 34.5374% ( 323) 00:07:06.650 6503.188 - 6553.600: 36.8580% ( 300) 00:07:06.650 6553.600 - 6604.012: 38.7222% ( 241) 00:07:06.650 6604.012 - 6654.425: 40.2614% ( 199) 00:07:06.650 6654.425 - 6704.837: 41.6615% ( 181) 00:07:06.650 6704.837 - 6755.249: 42.7444% ( 140) 00:07:06.650 6755.249 - 6805.662: 43.6417% ( 116) 00:07:06.650 6805.662 - 6856.074: 44.3379% ( 90) 00:07:06.650 
6856.074 - 6906.486: 44.9876% ( 84) 00:07:06.650 6906.486 - 6956.898: 45.5059% ( 67) 00:07:06.650 6956.898 - 7007.311: 45.9777% ( 61) 00:07:06.650 7007.311 - 7057.723: 46.3954% ( 54) 00:07:06.650 7057.723 - 7108.135: 46.8286% ( 56) 00:07:06.650 7108.135 - 7158.548: 47.2927% ( 60) 00:07:06.650 7158.548 - 7208.960: 47.6485% ( 46) 00:07:06.650 7208.960 - 7259.372: 47.9734% ( 42) 00:07:06.650 7259.372 - 7309.785: 48.2751% ( 39) 00:07:06.650 7309.785 - 7360.197: 48.5458% ( 35) 00:07:06.650 7360.197 - 7410.609: 48.8475% ( 39) 00:07:06.650 7410.609 - 7461.022: 49.0795% ( 30) 00:07:06.650 7461.022 - 7511.434: 49.2497% ( 22) 00:07:06.650 7511.434 - 7561.846: 49.4276% ( 23) 00:07:06.650 7561.846 - 7612.258: 49.5823% ( 20) 00:07:06.650 7612.258 - 7662.671: 49.7757% ( 25) 00:07:06.650 7662.671 - 7713.083: 49.9923% ( 28) 00:07:06.650 7713.083 - 7763.495: 50.2553% ( 34) 00:07:06.650 7763.495 - 7813.908: 50.4873% ( 30) 00:07:06.650 7813.908 - 7864.320: 50.7039% ( 28) 00:07:06.650 7864.320 - 7914.732: 50.9050% ( 26) 00:07:06.650 7914.732 - 7965.145: 51.0907% ( 24) 00:07:06.650 7965.145 - 8015.557: 51.2840% ( 25) 00:07:06.650 8015.557 - 8065.969: 51.4851% ( 26) 00:07:06.650 8065.969 - 8116.382: 51.6631% ( 23) 00:07:06.650 8116.382 - 8166.794: 51.8410% ( 23) 00:07:06.650 8166.794 - 8217.206: 51.9879% ( 19) 00:07:06.650 8217.206 - 8267.618: 52.1426% ( 20) 00:07:06.650 8267.618 - 8318.031: 52.3051% ( 21) 00:07:06.650 8318.031 - 8368.443: 52.4598% ( 20) 00:07:06.650 8368.443 - 8418.855: 52.6609% ( 26) 00:07:06.650 8418.855 - 8469.268: 52.9007% ( 31) 00:07:06.650 8469.268 - 8519.680: 53.1095% ( 27) 00:07:06.650 8519.680 - 8570.092: 53.3184% ( 27) 00:07:06.650 8570.092 - 8620.505: 53.4886% ( 22) 00:07:06.650 8620.505 - 8670.917: 53.6355% ( 19) 00:07:06.650 8670.917 - 8721.329: 53.8057% ( 22) 00:07:06.650 8721.329 - 8771.742: 53.9604% ( 20) 00:07:06.650 8771.742 - 8822.154: 54.1151% ( 20) 00:07:06.650 8822.154 - 8872.566: 54.2621% ( 19) 00:07:06.650 8872.566 - 8922.978: 54.4400% ( 23) 00:07:06.650 8922.978 - 8973.391: 54.6256% ( 24) 00:07:06.650 8973.391 - 9023.803: 54.8035% ( 23) 00:07:06.650 9023.803 - 9074.215: 54.9814% ( 23) 00:07:06.650 9074.215 - 9124.628: 55.1593% ( 23) 00:07:06.650 9124.628 - 9175.040: 55.3373% ( 23) 00:07:06.650 9175.040 - 9225.452: 55.5306% ( 25) 00:07:06.650 9225.452 - 9275.865: 55.7395% ( 27) 00:07:06.650 9275.865 - 9326.277: 55.9483% ( 27) 00:07:06.650 9326.277 - 9376.689: 56.1494% ( 26) 00:07:06.650 9376.689 - 9427.102: 56.3196% ( 22) 00:07:06.650 9427.102 - 9477.514: 56.4666% ( 19) 00:07:06.650 9477.514 - 9527.926: 56.6136% ( 19) 00:07:06.650 9527.926 - 9578.338: 56.7373% ( 16) 00:07:06.650 9578.338 - 9628.751: 56.8688% ( 17) 00:07:06.650 9628.751 - 9679.163: 57.0003% ( 17) 00:07:06.650 9679.163 - 9729.575: 57.1163% ( 15) 00:07:06.650 9729.575 - 9779.988: 57.2401% ( 16) 00:07:06.650 9779.988 - 9830.400: 57.3871% ( 19) 00:07:06.650 9830.400 - 9880.812: 57.5959% ( 27) 00:07:06.650 9880.812 - 9931.225: 57.7584% ( 21) 00:07:06.650 9931.225 - 9981.637: 57.9672% ( 27) 00:07:06.650 9981.637 - 10032.049: 58.1219% ( 20) 00:07:06.650 10032.049 - 10082.462: 58.2689% ( 19) 00:07:06.650 10082.462 - 10132.874: 58.3772% ( 14) 00:07:06.650 10132.874 - 10183.286: 58.4932% ( 15) 00:07:06.650 10183.286 - 10233.698: 58.5938% ( 13) 00:07:06.650 10233.698 - 10284.111: 58.6788% ( 11) 00:07:06.650 10284.111 - 10334.523: 58.8103% ( 17) 00:07:06.650 10334.523 - 10384.935: 59.0114% ( 26) 00:07:06.650 10384.935 - 10435.348: 59.1275% ( 15) 00:07:06.650 10435.348 - 10485.760: 59.2126% ( 11) 00:07:06.650 
00:07:06.651 [histogram buckets omitted: tail of the preceding latency histogram, 10485.760us to 36296.862us, cumulative 59.3363% -> 100.0000%]
00:07:06.651 
00:07:06.651 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:06.651 ==============================================================================
00:07:06.651        Range in us     Cumulative IO count
00:07:06.652 [histogram buckets omitted: 5570.560us to 34683.668us, cumulative 0.0077% -> 100.0000%]
00:07:06.653 
00:07:06.653 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:06.653 ==============================================================================
00:07:06.653        Range in us     Cumulative IO count
00:07:06.654 [histogram buckets omitted: 5595.766us to 33675.422us, cumulative 0.0155% -> 100.0000%]
00:07:06.654 
00:07:06.654 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:06.654 ==============================================================================
00:07:06.654        Range in us     Cumulative IO count
00:07:06.655 [histogram buckets omitted: 5570.560us to 26416.049us, cumulative 0.0077% -> 100.0000%]
00:07:06.655 
00:07:06.655  10:35:32 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:08.044 Initializing NVMe Controllers
00:07:08.044 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:08.044 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:08.044 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:08.044 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:08.044 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:08.044 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:08.044 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:08.044 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:08.044 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:08.044 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:08.044 Initialization complete. Launching workers.
00:07:08.044 ========================================================
00:07:08.044                                                                             Latency(us)
00:07:08.044 Device Information                       :       IOPS      MiB/s    Average        min        max
00:07:08.044 PCIE (0000:00:10.0) NSID 1 from core  0:   17510.17     205.20    7342.53    5673.22   31681.24
00:07:08.044 PCIE (0000:00:11.0) NSID 1 from core  0:   17510.17     205.20    7339.17    5744.22   30898.37
00:07:08.044 PCIE (0000:00:13.0) NSID 1 from core  0:   17510.17     205.20    7336.17    5818.43   30362.84
00:07:08.044 PCIE (0000:00:12.0) NSID 1 from core  0:   17510.17     205.20    7333.04    5859.29   29598.51
00:07:08.044 PCIE (0000:00:12.0) NSID 2 from core  0:   17510.17     205.20    7328.87    5783.47   27847.67
00:07:08.044 PCIE (0000:00:12.0) NSID 3 from core  0:   17510.17     205.20    7320.85    5883.98   25453.02
00:07:08.044 ========================================================
00:07:08.044 Total                                    :  105061.03    1231.18    7333.44    5673.22   31681.24
00:07:08.044 
00:07:08.044 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:08.044 =================================================================================
00:07:08.044   1.00000% :  6074.683us
00:07:08.044  10.00000% :  6402.363us
00:07:08.044  25.00000% :  6604.012us
00:07:08.044  50.00000% :  6856.074us
00:07:08.044  75.00000% :  7360.197us
00:07:08.044  90.00000% :  8519.680us
00:07:08.044  95.00000% :  9830.400us
00:07:08.044  98.00000% : 12451.840us
00:07:08.044  99.00000% : 13812.972us
00:07:08.044  99.50000% : 22786.363us
00:07:08.044  99.90000% : 31053.982us
00:07:08.044  99.99000% : 31658.929us
00:07:08.044  99.99900% : 31860.578us
00:07:08.044  99.99990% : 31860.578us
00:07:08.044  99.99999% : 31860.578us
00:07:08.044 
00:07:08.044 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:08.044 =================================================================================
00:07:08.044   1.00000% :  6175.508us
00:07:08.044  10.00000% :  6503.188us
00:07:08.044  25.00000% :  6654.425us
00:07:08.044  50.00000% :  6856.074us
00:07:08.044  75.00000% :  7259.372us
00:07:08.044  90.00000% :  8519.680us
00:07:08.044  95.00000% :  9830.400us
00:07:08.044  98.00000% : 12300.603us
00:07:08.044  99.00000% : 14014.622us
00:07:08.044  99.50000% : 22181.415us
00:07:08.044  99.90000% : 30650.683us
00:07:08.044  99.99000% : 31053.982us
00:07:08.044  99.99900% : 31053.982us
00:07:08.044  99.99990% : 31053.982us
00:07:08.044  99.99999% : 31053.982us
00:07:08.044 
00:07:08.044 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:08.044 =================================================================================
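[Editor's note, not part of the run output: a quick plausibility check on the summary table above. The MiB/s column should equal IOPS * io_size / 2^20, where io_size is the 12288-byte transfer size this run requested with -o 12288. My reading of the spdk_nvme_perf flags here -- -q queue depth, -w workload type, -o IO size in bytes, -t run time in seconds, -LL detailed latency histograms, -i shared-memory group ID -- should be verified against the tool's --help. A minimal awk sketch:]

    awk 'BEGIN {
        iops = 17510.17      # IOPS column, PCIE (0000:00:10.0) NSID 1 row above
        io_size = 12288      # bytes per IO, from the -o 12288 flag of this run
        printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
    }'
    # prints: 205.20 MiB/s -- matching the MiB/s column of that row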
00:07:08.044   1.00000% :  6200.714us
00:07:08.044  10.00000% :  6452.775us
00:07:08.044  25.00000% :  6654.425us
00:07:08.044  50.00000% :  6856.074us
00:07:08.044  75.00000% :  7309.785us
00:07:08.044  90.00000% :  8519.680us
00:07:08.044  95.00000% : 10233.698us
00:07:08.044  98.00000% : 12351.015us
00:07:08.044  99.00000% : 13611.323us
00:07:08.044  99.50000% : 21576.468us
00:07:08.044  99.90000% : 29642.437us
00:07:08.044  99.99000% : 30449.034us
00:07:08.044  99.99900% : 30449.034us
00:07:08.044  99.99990% : 30449.034us
00:07:08.044  99.99999% : 30449.034us
00:07:08.044 
00:07:08.044 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:08.044 =================================================================================
00:07:08.044   1.00000% :  6200.714us
00:07:08.044  10.00000% :  6452.775us
00:07:08.044  25.00000% :  6654.425us
00:07:08.044  50.00000% :  6856.074us
00:07:08.044  75.00000% :  7259.372us
00:07:08.044  90.00000% :  8418.855us
00:07:08.044  95.00000% : 10384.935us
00:07:08.044  98.00000% : 12502.252us
00:07:08.044  99.00000% : 13812.972us
00:07:08.044  99.50000% : 20971.520us
00:07:08.044  99.90000% : 29440.788us
00:07:08.044  99.99000% : 29642.437us
00:07:08.044  99.99900% : 29642.437us
00:07:08.044  99.99990% : 29642.437us
00:07:08.044  99.99999% : 29642.437us
00:07:08.044 
00:07:08.044 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:08.044 =================================================================================
00:07:08.044   1.00000% :  6200.714us
00:07:08.044  10.00000% :  6452.775us
00:07:08.044  25.00000% :  6654.425us
00:07:08.044  50.00000% :  6856.074us
00:07:08.044  75.00000% :  7259.372us
00:07:08.044  90.00000% :  8418.855us
00:07:08.044  95.00000% : 10334.523us
00:07:08.044  98.00000% : 12855.138us
00:07:08.044  99.00000% : 14014.622us
00:07:08.044  99.50000% : 20669.046us
00:07:08.044  99.90000% : 26819.348us
00:07:08.044  99.99000% : 27827.594us
00:07:08.044  99.99900% : 28029.243us
00:07:08.044  99.99990% : 28029.243us
00:07:08.044  99.99999% : 28029.243us
00:07:08.044 
00:07:08.044 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:08.044 =================================================================================
00:07:08.044   1.00000% :  6200.714us
00:07:08.044  10.00000% :  6452.775us
00:07:08.044  25.00000% :  6654.425us
00:07:08.044  50.00000% :  6856.074us
00:07:08.044  75.00000% :  7259.372us
00:07:08.044  90.00000% :  8519.680us
00:07:08.044  95.00000% : 10082.462us
00:07:08.044  98.00000% : 13107.200us
00:07:08.044  99.00000% : 14115.446us
00:07:08.044  99.50000% : 20265.748us
00:07:08.044  99.90000% : 25407.803us
00:07:08.044  99.99000% : 25508.628us
00:07:08.044  99.99900% : 25508.628us
00:07:08.044  99.99990% : 25508.628us
00:07:08.044  99.99999% : 25508.628us
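[Editor's note, not part of the run output: the percentile summaries above are read off the cumulative histograms that follow. As a rough cross-check, the sketch below scans raw bucket lines of the form "<lo> - <hi>: <cum%> ( <count>)" and prints the upper edge of the first bucket whose cumulative percentage reaches a target. console.log is a hypothetical capture of this console output, the scan does not scope to a single device's section, and perf's own summary may interpolate within buckets, so expect agreement only to bucket granularity:]

    target=99.0   # percentile to look up
    grep -Eo '[0-9]+\.[0-9]+ - [0-9]+\.[0-9]+: *[0-9]+\.[0-9]+%' console.log |
      awk -v t="$target" -F'[ :%-]+' '($3 + 0) >= t { print $2 "us"; exit }'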
00:07:08.044 
00:07:08.044 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:08.044 ==============================================================================
00:07:08.044        Range in us     Cumulative IO count
00:07:08.046 [histogram buckets omitted: 5671.385us to 31860.578us, cumulative 0.0114% -> 100.0000%]
00:07:08.046 
00:07:08.046 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:08.046 ==============================================================================
00:07:08.046        Range in us     Cumulative IO count
00:07:08.047 [histogram buckets omitted: 5721.797us to 31053.982us, cumulative 0.0057% -> 100.0000%]
00:07:08.047 
00:07:08.047 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:08.047 ==============================================================================
00:07:08.047        Range in us     Cumulative IO count
00:07:08.048 [histogram buckets omitted: starts at 5797.415us, cumulative 0.0057%; output truncated mid-histogram near the 10838.646us bucket at ~95.98% cumulative]
00:07:08.047 7208.960 - 7259.372: 74.8118% ( 222) 00:07:08.047 7259.372 - 7309.785: 75.9922% ( 207) 00:07:08.047 7309.785 - 7360.197: 77.3951% ( 246) 00:07:08.047 7360.197 - 7410.609: 78.4557% ( 186) 00:07:08.047 7410.609 - 7461.022: 79.2427% ( 138) 00:07:08.047 7461.022 - 7511.434: 80.1323% ( 156) 00:07:08.047 7511.434 - 7561.846: 81.0219% ( 156) 00:07:08.047 7561.846 - 7612.258: 82.1738% ( 202) 00:07:08.047 7612.258 - 7662.671: 82.9608% ( 138) 00:07:08.047 7662.671 - 7713.083: 83.7192% ( 133) 00:07:08.047 7713.083 - 7763.495: 84.3351% ( 108) 00:07:08.047 7763.495 - 7813.908: 84.9396% ( 106) 00:07:08.047 7813.908 - 7864.320: 85.3387% ( 70) 00:07:08.047 7864.320 - 7914.732: 85.7208% ( 67) 00:07:08.047 7914.732 - 7965.145: 86.2340% ( 90) 00:07:08.047 7965.145 - 8015.557: 86.6674% ( 76) 00:07:08.047 8015.557 - 8065.969: 87.0438% ( 66) 00:07:08.047 8065.969 - 8116.382: 87.2947% ( 44) 00:07:08.047 8116.382 - 8166.794: 87.6255% ( 58) 00:07:08.047 8166.794 - 8217.206: 87.9961% ( 65) 00:07:08.047 8217.206 - 8267.618: 88.2584% ( 46) 00:07:08.047 8267.618 - 8318.031: 88.5322% ( 48) 00:07:08.047 8318.031 - 8368.443: 88.8344% ( 53) 00:07:08.047 8368.443 - 8418.855: 89.2108% ( 66) 00:07:08.047 8418.855 - 8469.268: 89.6214% ( 72) 00:07:08.047 8469.268 - 8519.680: 90.2030% ( 102) 00:07:08.047 8519.680 - 8570.092: 90.8588% ( 115) 00:07:08.047 8570.092 - 8620.505: 91.2295% ( 65) 00:07:08.047 8620.505 - 8670.917: 91.5431% ( 55) 00:07:08.047 8670.917 - 8721.329: 91.8282% ( 50) 00:07:08.047 8721.329 - 8771.742: 92.1020% ( 48) 00:07:08.047 8771.742 - 8822.154: 92.3415% ( 42) 00:07:08.047 8822.154 - 8872.566: 92.5011% ( 28) 00:07:08.047 8872.566 - 8922.978: 92.6608% ( 28) 00:07:08.047 8922.978 - 8973.391: 92.8148% ( 27) 00:07:08.047 8973.391 - 9023.803: 93.0315% ( 38) 00:07:08.048 9023.803 - 9074.215: 93.2539% ( 39) 00:07:08.048 9074.215 - 9124.628: 93.4250% ( 30) 00:07:08.048 9124.628 - 9175.040: 93.5048% ( 14) 00:07:08.048 9175.040 - 9225.452: 93.5903% ( 15) 00:07:08.048 9225.452 - 9275.865: 93.6417% ( 9) 00:07:08.048 9275.865 - 9326.277: 93.6930% ( 9) 00:07:08.048 9326.277 - 9376.689: 93.7443% ( 9) 00:07:08.048 9376.689 - 9427.102: 93.7899% ( 8) 00:07:08.048 9427.102 - 9477.514: 93.8355% ( 8) 00:07:08.048 9477.514 - 9527.926: 93.8698% ( 6) 00:07:08.048 9527.926 - 9578.338: 93.8983% ( 5) 00:07:08.048 9578.338 - 9628.751: 93.9211% ( 4) 00:07:08.048 9628.751 - 9679.163: 93.9496% ( 5) 00:07:08.048 9679.163 - 9729.575: 94.0180% ( 12) 00:07:08.048 9729.575 - 9779.988: 94.0522% ( 6) 00:07:08.048 9779.988 - 9830.400: 94.1093% ( 10) 00:07:08.048 9830.400 - 9880.812: 94.1606% ( 9) 00:07:08.048 9880.812 - 9931.225: 94.2518% ( 16) 00:07:08.048 9931.225 - 9981.637: 94.3773% ( 22) 00:07:08.048 9981.637 - 10032.049: 94.4685% ( 16) 00:07:08.048 10032.049 - 10082.462: 94.5769% ( 19) 00:07:08.048 10082.462 - 10132.874: 94.7251% ( 26) 00:07:08.048 10132.874 - 10183.286: 94.9760% ( 44) 00:07:08.048 10183.286 - 10233.698: 95.2498% ( 48) 00:07:08.048 10233.698 - 10284.111: 95.4323% ( 32) 00:07:08.048 10284.111 - 10334.523: 95.5463% ( 20) 00:07:08.048 10334.523 - 10384.935: 95.6204% ( 13) 00:07:08.048 10384.935 - 10435.348: 95.6661% ( 8) 00:07:08.048 10435.348 - 10485.760: 95.7174% ( 9) 00:07:08.048 10485.760 - 10536.172: 95.7744% ( 10) 00:07:08.048 10536.172 - 10586.585: 95.8257% ( 9) 00:07:08.048 10586.585 - 10636.997: 95.8714% ( 8) 00:07:08.048 10636.997 - 10687.409: 95.8828% ( 2) 00:07:08.048 10687.409 - 10737.822: 95.9056% ( 4) 00:07:08.048 10737.822 - 10788.234: 95.9398% ( 6) 00:07:08.048 10788.234 - 10838.646: 95.9797% ( 
7) 00:07:08.048 10838.646 - 10889.058: 96.0253% ( 8) 00:07:08.048 10889.058 - 10939.471: 96.0880% ( 11) 00:07:08.048 10939.471 - 10989.883: 96.1565% ( 12) 00:07:08.048 10989.883 - 11040.295: 96.2249% ( 12) 00:07:08.048 11040.295 - 11090.708: 96.2933% ( 12) 00:07:08.048 11090.708 - 11141.120: 96.3333% ( 7) 00:07:08.048 11141.120 - 11191.532: 96.4473% ( 20) 00:07:08.048 11191.532 - 11241.945: 96.6241% ( 31) 00:07:08.048 11241.945 - 11292.357: 96.7838% ( 28) 00:07:08.048 11292.357 - 11342.769: 96.8921% ( 19) 00:07:08.048 11342.769 - 11393.182: 97.0290% ( 24) 00:07:08.048 11393.182 - 11443.594: 97.1487% ( 21) 00:07:08.048 11443.594 - 11494.006: 97.2343% ( 15) 00:07:08.048 11494.006 - 11544.418: 97.3369% ( 18) 00:07:08.048 11544.418 - 11594.831: 97.3939% ( 10) 00:07:08.048 11594.831 - 11645.243: 97.4795% ( 15) 00:07:08.048 11645.243 - 11695.655: 97.5080% ( 5) 00:07:08.048 11695.655 - 11746.068: 97.5479% ( 7) 00:07:08.048 11746.068 - 11796.480: 97.5878% ( 7) 00:07:08.048 11796.480 - 11846.892: 97.6163% ( 5) 00:07:08.048 11846.892 - 11897.305: 97.6448% ( 5) 00:07:08.048 11897.305 - 11947.717: 97.6734% ( 5) 00:07:08.048 11947.717 - 11998.129: 97.7019% ( 5) 00:07:08.048 11998.129 - 12048.542: 97.7418% ( 7) 00:07:08.048 12048.542 - 12098.954: 97.7817% ( 7) 00:07:08.048 12098.954 - 12149.366: 97.8216% ( 7) 00:07:08.048 12149.366 - 12199.778: 97.8901% ( 12) 00:07:08.048 12199.778 - 12250.191: 97.9357% ( 8) 00:07:08.048 12250.191 - 12300.603: 97.9813% ( 8) 00:07:08.048 12300.603 - 12351.015: 98.1068% ( 22) 00:07:08.048 12351.015 - 12401.428: 98.1410% ( 6) 00:07:08.048 12401.428 - 12451.840: 98.1638% ( 4) 00:07:08.048 12451.840 - 12502.252: 98.1980% ( 6) 00:07:08.048 12502.252 - 12552.665: 98.2322% ( 6) 00:07:08.048 12552.665 - 12603.077: 98.2664% ( 6) 00:07:08.048 12603.077 - 12653.489: 98.2835% ( 3) 00:07:08.048 12653.489 - 12703.902: 98.3063% ( 4) 00:07:08.048 12703.902 - 12754.314: 98.3234% ( 3) 00:07:08.048 12754.314 - 12804.726: 98.3520% ( 5) 00:07:08.048 12804.726 - 12855.138: 98.3976% ( 8) 00:07:08.048 12855.138 - 12905.551: 98.4432% ( 8) 00:07:08.048 12905.551 - 13006.375: 98.6029% ( 28) 00:07:08.048 13006.375 - 13107.200: 98.7740% ( 30) 00:07:08.048 13107.200 - 13208.025: 98.8196% ( 8) 00:07:08.048 13208.025 - 13308.849: 98.8424% ( 4) 00:07:08.048 13308.849 - 13409.674: 98.9108% ( 12) 00:07:08.048 13409.674 - 13510.498: 98.9792% ( 12) 00:07:08.048 13510.498 - 13611.323: 99.0249% ( 8) 00:07:08.048 13611.323 - 13712.148: 99.0648% ( 7) 00:07:08.048 13712.148 - 13812.972: 99.1902% ( 22) 00:07:08.048 13812.972 - 13913.797: 99.2416% ( 9) 00:07:08.048 13913.797 - 14014.622: 99.2644% ( 4) 00:07:08.048 14014.622 - 14115.446: 99.2701% ( 1) 00:07:08.048 20669.046 - 20769.871: 99.2872% ( 3) 00:07:08.048 20769.871 - 20870.695: 99.3328% ( 8) 00:07:08.048 20870.695 - 20971.520: 99.3670% ( 6) 00:07:08.048 20971.520 - 21072.345: 99.3955% ( 5) 00:07:08.048 21072.345 - 21173.169: 99.4240% ( 5) 00:07:08.048 21173.169 - 21273.994: 99.4469% ( 4) 00:07:08.048 21273.994 - 21374.818: 99.4697% ( 4) 00:07:08.048 21374.818 - 21475.643: 99.4982% ( 5) 00:07:08.048 21475.643 - 21576.468: 99.5153% ( 3) 00:07:08.048 21576.468 - 21677.292: 99.5267% ( 2) 00:07:08.048 21677.292 - 21778.117: 99.5438% ( 3) 00:07:08.048 21778.117 - 21878.942: 99.5495% ( 1) 00:07:08.048 21878.942 - 21979.766: 99.5666% ( 3) 00:07:08.048 21979.766 - 22080.591: 99.5780% ( 2) 00:07:08.048 22080.591 - 22181.415: 99.5894% ( 2) 00:07:08.048 22181.415 - 22282.240: 99.6008% ( 2) 00:07:08.048 22282.240 - 22383.065: 99.6122% ( 2) 00:07:08.048 22383.065 - 
22483.889: 99.6293% ( 3) 00:07:08.048 22483.889 - 22584.714: 99.6350% ( 1) 00:07:08.048 27625.945 - 27827.594: 99.6635% ( 5) 00:07:08.048 27827.594 - 28029.243: 99.6864% ( 4) 00:07:08.048 28029.243 - 28230.892: 99.7092% ( 4) 00:07:08.048 28230.892 - 28432.542: 99.7263% ( 3) 00:07:08.048 28432.542 - 28634.191: 99.7662% ( 7) 00:07:08.048 28634.191 - 28835.840: 99.8061% ( 7) 00:07:08.048 28835.840 - 29037.489: 99.8232% ( 3) 00:07:08.048 29037.489 - 29239.138: 99.8460% ( 4) 00:07:08.048 29239.138 - 29440.788: 99.8745% ( 5) 00:07:08.048 29440.788 - 29642.437: 99.9031% ( 5) 00:07:08.048 29642.437 - 29844.086: 99.9259% ( 4) 00:07:08.048 29844.086 - 30045.735: 99.9544% ( 5) 00:07:08.048 30045.735 - 30247.385: 99.9829% ( 5) 00:07:08.048 30247.385 - 30449.034: 100.0000% ( 3) 00:07:08.048 00:07:08.048 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:08.048 ============================================================================== 00:07:08.048 Range in us Cumulative IO count 00:07:08.048 5847.828 - 5873.034: 0.0057% ( 1) 00:07:08.048 5898.240 - 5923.446: 0.0114% ( 1) 00:07:08.048 5973.858 - 5999.065: 0.0342% ( 4) 00:07:08.048 5999.065 - 6024.271: 0.0741% ( 7) 00:07:08.048 6024.271 - 6049.477: 0.1255% ( 9) 00:07:08.048 6049.477 - 6074.683: 0.1768% ( 9) 00:07:08.048 6074.683 - 6099.889: 0.2623% ( 15) 00:07:08.048 6099.889 - 6125.095: 0.3764% ( 20) 00:07:08.048 6125.095 - 6150.302: 0.5189% ( 25) 00:07:08.048 6150.302 - 6175.508: 0.7870% ( 47) 00:07:08.048 6175.508 - 6200.714: 1.0379% ( 44) 00:07:08.048 6200.714 - 6225.920: 1.3914% ( 62) 00:07:08.048 6225.920 - 6251.126: 1.8590% ( 82) 00:07:08.048 6251.126 - 6276.332: 2.4635% ( 106) 00:07:08.048 6276.332 - 6301.538: 3.3987% ( 164) 00:07:08.048 6301.538 - 6326.745: 4.2598% ( 151) 00:07:08.048 6326.745 - 6351.951: 5.1323% ( 153) 00:07:08.048 6351.951 - 6377.157: 6.2158% ( 190) 00:07:08.048 6377.157 - 6402.363: 7.6984% ( 260) 00:07:08.048 6402.363 - 6427.569: 9.2724% ( 276) 00:07:08.048 6427.569 - 6452.775: 11.0287% ( 308) 00:07:08.048 6452.775 - 6503.188: 14.4161% ( 594) 00:07:08.048 6503.188 - 6553.600: 18.4135% ( 701) 00:07:08.048 6553.600 - 6604.012: 23.3691% ( 869) 00:07:08.048 6604.012 - 6654.425: 28.7067% ( 936) 00:07:08.048 6654.425 - 6704.837: 34.1127% ( 948) 00:07:08.048 6704.837 - 6755.249: 41.4462% ( 1286) 00:07:08.048 6755.249 - 6805.662: 47.8844% ( 1129) 00:07:08.048 6805.662 - 6856.074: 54.0146% ( 1075) 00:07:08.048 6856.074 - 6906.486: 58.8447% ( 847) 00:07:08.048 6906.486 - 6956.898: 62.5912% ( 657) 00:07:08.048 6956.898 - 7007.311: 65.6478% ( 536) 00:07:08.048 7007.311 - 7057.723: 68.6816% ( 532) 00:07:08.048 7057.723 - 7108.135: 70.8428% ( 379) 00:07:08.048 7108.135 - 7158.548: 72.8102% ( 345) 00:07:08.048 7158.548 - 7208.960: 74.0192% ( 212) 00:07:08.049 7208.960 - 7259.372: 75.0399% ( 179) 00:07:08.049 7259.372 - 7309.785: 76.0721% ( 181) 00:07:08.049 7309.785 - 7360.197: 76.9959% ( 162) 00:07:08.049 7360.197 - 7410.609: 77.9653% ( 170) 00:07:08.049 7410.609 - 7461.022: 79.2028% ( 217) 00:07:08.049 7461.022 - 7511.434: 80.0924% ( 156) 00:07:08.049 7511.434 - 7561.846: 80.9934% ( 158) 00:07:08.049 7561.846 - 7612.258: 81.7860% ( 139) 00:07:08.049 7612.258 - 7662.671: 82.5331% ( 131) 00:07:08.049 7662.671 - 7713.083: 83.1604% ( 110) 00:07:08.049 7713.083 - 7763.495: 83.6166% ( 80) 00:07:08.049 7763.495 - 7813.908: 84.0842% ( 82) 00:07:08.049 7813.908 - 7864.320: 84.8939% ( 142) 00:07:08.049 7864.320 - 7914.732: 85.4870% ( 104) 00:07:08.049 7914.732 - 7965.145: 86.0972% ( 107) 00:07:08.049 7965.145 - 8015.557: 
86.5306% ( 76) 00:07:08.049 8015.557 - 8065.969: 87.0495% ( 91) 00:07:08.049 8065.969 - 8116.382: 87.8307% ( 137) 00:07:08.049 8116.382 - 8166.794: 88.2356% ( 71) 00:07:08.049 8166.794 - 8217.206: 88.5949% ( 63) 00:07:08.049 8217.206 - 8267.618: 88.8515% ( 45) 00:07:08.049 8267.618 - 8318.031: 89.3191% ( 82) 00:07:08.049 8318.031 - 8368.443: 89.7924% ( 83) 00:07:08.049 8368.443 - 8418.855: 90.1745% ( 67) 00:07:08.049 8418.855 - 8469.268: 90.4710% ( 52) 00:07:08.049 8469.268 - 8519.680: 90.8018% ( 58) 00:07:08.049 8519.680 - 8570.092: 91.2295% ( 75) 00:07:08.049 8570.092 - 8620.505: 91.5716% ( 60) 00:07:08.049 8620.505 - 8670.917: 91.8225% ( 44) 00:07:08.049 8670.917 - 8721.329: 92.0906% ( 47) 00:07:08.049 8721.329 - 8771.742: 92.2844% ( 34) 00:07:08.049 8771.742 - 8822.154: 92.4270% ( 25) 00:07:08.049 8822.154 - 8872.566: 92.5810% ( 27) 00:07:08.049 8872.566 - 8922.978: 92.7235% ( 25) 00:07:08.049 8922.978 - 8973.391: 92.9003% ( 31) 00:07:08.049 8973.391 - 9023.803: 93.2425% ( 60) 00:07:08.049 9023.803 - 9074.215: 93.4535% ( 37) 00:07:08.049 9074.215 - 9124.628: 93.6474% ( 34) 00:07:08.049 9124.628 - 9175.040: 93.7158% ( 12) 00:07:08.049 9175.040 - 9225.452: 93.7785% ( 11) 00:07:08.049 9225.452 - 9275.865: 93.7956% ( 3) 00:07:08.049 9477.514 - 9527.926: 93.8127% ( 3) 00:07:08.049 9527.926 - 9578.338: 93.8298% ( 3) 00:07:08.049 9578.338 - 9628.751: 93.8526% ( 4) 00:07:08.049 9628.751 - 9679.163: 93.8755% ( 4) 00:07:08.049 9679.163 - 9729.575: 93.8983% ( 4) 00:07:08.049 9729.575 - 9779.988: 93.9211% ( 4) 00:07:08.049 9779.988 - 9830.400: 93.9382% ( 3) 00:07:08.049 9830.400 - 9880.812: 93.9781% ( 7) 00:07:08.049 9880.812 - 9931.225: 94.0408% ( 11) 00:07:08.049 9931.225 - 9981.637: 94.0979% ( 10) 00:07:08.049 9981.637 - 10032.049: 94.1606% ( 11) 00:07:08.049 10032.049 - 10082.462: 94.3488% ( 33) 00:07:08.049 10082.462 - 10132.874: 94.4685% ( 21) 00:07:08.049 10132.874 - 10183.286: 94.7137% ( 43) 00:07:08.049 10183.286 - 10233.698: 94.7993% ( 15) 00:07:08.049 10233.698 - 10284.111: 94.8791% ( 14) 00:07:08.049 10284.111 - 10334.523: 94.9532% ( 13) 00:07:08.049 10334.523 - 10384.935: 95.0160% ( 11) 00:07:08.049 10384.935 - 10435.348: 95.0901% ( 13) 00:07:08.049 10435.348 - 10485.760: 95.1870% ( 17) 00:07:08.049 10485.760 - 10536.172: 95.3581% ( 30) 00:07:08.049 10536.172 - 10586.585: 95.4665% ( 19) 00:07:08.049 10586.585 - 10636.997: 95.5577% ( 16) 00:07:08.049 10636.997 - 10687.409: 95.6718% ( 20) 00:07:08.049 10687.409 - 10737.822: 95.8542% ( 32) 00:07:08.049 10737.822 - 10788.234: 96.0367% ( 32) 00:07:08.049 10788.234 - 10838.646: 96.2192% ( 32) 00:07:08.049 10838.646 - 10889.058: 96.3047% ( 15) 00:07:08.049 10889.058 - 10939.471: 96.4188% ( 20) 00:07:08.049 10939.471 - 10989.883: 96.4872% ( 12) 00:07:08.049 10989.883 - 11040.295: 96.5443% ( 10) 00:07:08.049 11040.295 - 11090.708: 96.6127% ( 12) 00:07:08.049 11090.708 - 11141.120: 96.6868% ( 13) 00:07:08.049 11141.120 - 11191.532: 96.7552% ( 12) 00:07:08.049 11191.532 - 11241.945: 96.8465% ( 16) 00:07:08.049 11241.945 - 11292.357: 96.9377% ( 16) 00:07:08.049 11292.357 - 11342.769: 97.0347% ( 17) 00:07:08.049 11342.769 - 11393.182: 97.0917% ( 10) 00:07:08.049 11393.182 - 11443.594: 97.1316% ( 7) 00:07:08.049 11443.594 - 11494.006: 97.1658% ( 6) 00:07:08.049 11494.006 - 11544.418: 97.2000% ( 6) 00:07:08.049 11544.418 - 11594.831: 97.2343% ( 6) 00:07:08.049 11594.831 - 11645.243: 97.2742% ( 7) 00:07:08.049 11645.243 - 11695.655: 97.3141% ( 7) 00:07:08.049 11695.655 - 11746.068: 97.3654% ( 9) 00:07:08.049 11746.068 - 11796.480: 97.4110% ( 8) 
00:07:08.049 11796.480 - 11846.892: 97.4281% ( 3) 00:07:08.049 11846.892 - 11897.305: 97.4624% ( 6) 00:07:08.049 11897.305 - 11947.717: 97.5023% ( 7) 00:07:08.049 11947.717 - 11998.129: 97.5194% ( 3) 00:07:08.049 11998.129 - 12048.542: 97.5536% ( 6) 00:07:08.049 12048.542 - 12098.954: 97.6049% ( 9) 00:07:08.049 12098.954 - 12149.366: 97.6505% ( 8) 00:07:08.049 12149.366 - 12199.778: 97.7361% ( 15) 00:07:08.049 12199.778 - 12250.191: 97.7646% ( 5) 00:07:08.049 12250.191 - 12300.603: 97.8102% ( 8) 00:07:08.049 12300.603 - 12351.015: 97.8729% ( 11) 00:07:08.049 12351.015 - 12401.428: 97.9243% ( 9) 00:07:08.049 12401.428 - 12451.840: 97.9699% ( 8) 00:07:08.049 12451.840 - 12502.252: 98.0212% ( 9) 00:07:08.049 12502.252 - 12552.665: 98.0782% ( 10) 00:07:08.049 12552.665 - 12603.077: 98.1695% ( 16) 00:07:08.049 12603.077 - 12653.489: 98.2607% ( 16) 00:07:08.049 12653.489 - 12703.902: 98.3577% ( 17) 00:07:08.049 12703.902 - 12754.314: 98.4147% ( 10) 00:07:08.049 12754.314 - 12804.726: 98.4774% ( 11) 00:07:08.049 12804.726 - 12855.138: 98.5516% ( 13) 00:07:08.049 12855.138 - 12905.551: 98.6485% ( 17) 00:07:08.049 12905.551 - 13006.375: 98.7340% ( 15) 00:07:08.049 13006.375 - 13107.200: 98.7911% ( 10) 00:07:08.049 13107.200 - 13208.025: 98.8310% ( 7) 00:07:08.049 13208.025 - 13308.849: 98.8538% ( 4) 00:07:08.049 13308.849 - 13409.674: 98.8709% ( 3) 00:07:08.049 13409.674 - 13510.498: 98.8937% ( 4) 00:07:08.049 13510.498 - 13611.323: 98.9279% ( 6) 00:07:08.049 13611.323 - 13712.148: 98.9678% ( 7) 00:07:08.049 13712.148 - 13812.972: 99.0135% ( 8) 00:07:08.049 13812.972 - 13913.797: 99.0477% ( 6) 00:07:08.049 13913.797 - 14014.622: 99.1332% ( 15) 00:07:08.049 14014.622 - 14115.446: 99.2016% ( 12) 00:07:08.049 14115.446 - 14216.271: 99.2416% ( 7) 00:07:08.049 14216.271 - 14317.095: 99.2701% ( 5) 00:07:08.049 19963.274 - 20064.098: 99.2872% ( 3) 00:07:08.049 20064.098 - 20164.923: 99.3100% ( 4) 00:07:08.049 20164.923 - 20265.748: 99.3385% ( 5) 00:07:08.049 20265.748 - 20366.572: 99.3613% ( 4) 00:07:08.049 20366.572 - 20467.397: 99.3898% ( 5) 00:07:08.049 20467.397 - 20568.222: 99.4354% ( 8) 00:07:08.049 20568.222 - 20669.046: 99.4754% ( 7) 00:07:08.050 20669.046 - 20769.871: 99.4868% ( 2) 00:07:08.050 20769.871 - 20870.695: 99.4982% ( 2) 00:07:08.050 20870.695 - 20971.520: 99.5210% ( 4) 00:07:08.050 20971.520 - 21072.345: 99.5381% ( 3) 00:07:08.050 21072.345 - 21173.169: 99.5552% ( 3) 00:07:08.050 21173.169 - 21273.994: 99.5723% ( 3) 00:07:08.050 21273.994 - 21374.818: 99.5894% ( 3) 00:07:08.050 21374.818 - 21475.643: 99.6065% ( 3) 00:07:08.050 21475.643 - 21576.468: 99.6293% ( 4) 00:07:08.050 21576.468 - 21677.292: 99.6350% ( 1) 00:07:08.050 27020.997 - 27222.646: 99.6521% ( 3) 00:07:08.050 27222.646 - 27424.295: 99.6693% ( 3) 00:07:08.050 27424.295 - 27625.945: 99.7149% ( 8) 00:07:08.050 28029.243 - 28230.892: 99.7377% ( 4) 00:07:08.050 28230.892 - 28432.542: 99.7548% ( 3) 00:07:08.050 28432.542 - 28634.191: 99.7776% ( 4) 00:07:08.050 28634.191 - 28835.840: 99.8004% ( 4) 00:07:08.050 28835.840 - 29037.489: 99.8460% ( 8) 00:07:08.050 29037.489 - 29239.138: 99.8974% ( 9) 00:07:08.050 29239.138 - 29440.788: 99.9544% ( 10) 00:07:08.050 29440.788 - 29642.437: 100.0000% ( 8) 00:07:08.050 00:07:08.050 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:08.050 ============================================================================== 00:07:08.050 Range in us Cumulative IO count 00:07:08.050 5772.209 - 5797.415: 0.0057% ( 1) 00:07:08.050 5797.415 - 5822.622: 0.0114% ( 1) 00:07:08.050 
5898.240 - 5923.446: 0.0228% ( 2) 00:07:08.050 5999.065 - 6024.271: 0.0342% ( 2) 00:07:08.050 6024.271 - 6049.477: 0.0627% ( 5) 00:07:08.050 6049.477 - 6074.683: 0.1141% ( 9) 00:07:08.050 6074.683 - 6099.889: 0.2110% ( 17) 00:07:08.050 6099.889 - 6125.095: 0.3764% ( 29) 00:07:08.050 6125.095 - 6150.302: 0.5474% ( 30) 00:07:08.050 6150.302 - 6175.508: 0.7984% ( 44) 00:07:08.050 6175.508 - 6200.714: 1.3116% ( 90) 00:07:08.050 6200.714 - 6225.920: 1.8134% ( 88) 00:07:08.050 6225.920 - 6251.126: 2.2411% ( 75) 00:07:08.050 6251.126 - 6276.332: 2.8741% ( 111) 00:07:08.050 6276.332 - 6301.538: 3.5698% ( 122) 00:07:08.050 6301.538 - 6326.745: 4.3111% ( 130) 00:07:08.050 6326.745 - 6351.951: 5.4288% ( 196) 00:07:08.050 6351.951 - 6377.157: 6.4952% ( 187) 00:07:08.050 6377.157 - 6402.363: 7.5958% ( 193) 00:07:08.050 6402.363 - 6427.569: 8.6907% ( 192) 00:07:08.050 6427.569 - 6452.775: 10.3672% ( 294) 00:07:08.050 6452.775 - 6503.188: 13.8002% ( 602) 00:07:08.050 6503.188 - 6553.600: 18.4135% ( 809) 00:07:08.050 6553.600 - 6604.012: 22.8844% ( 784) 00:07:08.050 6604.012 - 6654.425: 28.5185% ( 988) 00:07:08.050 6654.425 - 6704.837: 34.8654% ( 1113) 00:07:08.050 6704.837 - 6755.249: 42.1647% ( 1280) 00:07:08.050 6755.249 - 6805.662: 47.8558% ( 998) 00:07:08.050 6805.662 - 6856.074: 53.1364% ( 926) 00:07:08.050 6856.074 - 6906.486: 58.5253% ( 945) 00:07:08.050 6906.486 - 6956.898: 63.0474% ( 793) 00:07:08.050 6956.898 - 7007.311: 66.6800% ( 637) 00:07:08.050 7007.311 - 7057.723: 69.1036% ( 425) 00:07:08.050 7057.723 - 7108.135: 70.9854% ( 330) 00:07:08.050 7108.135 - 7158.548: 73.1353% ( 377) 00:07:08.050 7158.548 - 7208.960: 74.4526% ( 231) 00:07:08.050 7208.960 - 7259.372: 75.4448% ( 174) 00:07:08.050 7259.372 - 7309.785: 76.2603% ( 143) 00:07:08.050 7309.785 - 7360.197: 77.1442% ( 155) 00:07:08.050 7360.197 - 7410.609: 77.9254% ( 137) 00:07:08.050 7410.609 - 7461.022: 78.4729% ( 96) 00:07:08.050 7461.022 - 7511.434: 79.2712% ( 140) 00:07:08.050 7511.434 - 7561.846: 79.9612% ( 121) 00:07:08.050 7561.846 - 7612.258: 80.9250% ( 169) 00:07:08.050 7612.258 - 7662.671: 82.1168% ( 209) 00:07:08.050 7662.671 - 7713.083: 83.0919% ( 171) 00:07:08.050 7713.083 - 7763.495: 83.8390% ( 131) 00:07:08.050 7763.495 - 7813.908: 84.4776% ( 112) 00:07:08.050 7813.908 - 7864.320: 84.9281% ( 79) 00:07:08.050 7864.320 - 7914.732: 85.4015% ( 83) 00:07:08.050 7914.732 - 7965.145: 86.1086% ( 124) 00:07:08.050 7965.145 - 8015.557: 86.6218% ( 90) 00:07:08.050 8015.557 - 8065.969: 86.9982% ( 66) 00:07:08.050 8065.969 - 8116.382: 87.3517% ( 62) 00:07:08.050 8116.382 - 8166.794: 87.7737% ( 74) 00:07:08.050 8166.794 - 8217.206: 88.2128% ( 77) 00:07:08.050 8217.206 - 8267.618: 88.6405% ( 75) 00:07:08.050 8267.618 - 8318.031: 89.1708% ( 93) 00:07:08.050 8318.031 - 8368.443: 89.6955% ( 92) 00:07:08.050 8368.443 - 8418.855: 90.1061% ( 72) 00:07:08.050 8418.855 - 8469.268: 90.5395% ( 76) 00:07:08.050 8469.268 - 8519.680: 91.0071% ( 82) 00:07:08.050 8519.680 - 8570.092: 91.2580% ( 44) 00:07:08.050 8570.092 - 8620.505: 91.6458% ( 68) 00:07:08.050 8620.505 - 8670.917: 91.8453% ( 35) 00:07:08.050 8670.917 - 8721.329: 92.1476% ( 53) 00:07:08.050 8721.329 - 8771.742: 92.2673% ( 21) 00:07:08.050 8771.742 - 8822.154: 92.3700% ( 18) 00:07:08.050 8822.154 - 8872.566: 92.4783% ( 19) 00:07:08.050 8872.566 - 8922.978: 92.7749% ( 52) 00:07:08.050 8922.978 - 8973.391: 92.8604% ( 15) 00:07:08.050 8973.391 - 9023.803: 93.1227% ( 46) 00:07:08.050 9023.803 - 9074.215: 93.1911% ( 12) 00:07:08.050 9074.215 - 9124.628: 93.2995% ( 19) 00:07:08.050 9124.628 - 
9175.040: 93.4991% ( 35) 00:07:08.050 9175.040 - 9225.452: 93.7101% ( 37) 00:07:08.050 9225.452 - 9275.865: 93.7671% ( 10) 00:07:08.050 9275.865 - 9326.277: 93.8241% ( 10) 00:07:08.050 9326.277 - 9376.689: 93.8698% ( 8) 00:07:08.050 9376.689 - 9427.102: 93.9040% ( 6) 00:07:08.050 9427.102 - 9477.514: 93.9325% ( 5) 00:07:08.050 9477.514 - 9527.926: 93.9553% ( 4) 00:07:08.050 9527.926 - 9578.338: 93.9781% ( 4) 00:07:08.050 9578.338 - 9628.751: 93.9952% ( 3) 00:07:08.050 9628.751 - 9679.163: 94.0180% ( 4) 00:07:08.050 9679.163 - 9729.575: 94.0408% ( 4) 00:07:08.050 9729.575 - 9779.988: 94.0579% ( 3) 00:07:08.050 9779.988 - 9830.400: 94.0750% ( 3) 00:07:08.050 9830.400 - 9880.812: 94.0922% ( 3) 00:07:08.050 9880.812 - 9931.225: 94.1264% ( 6) 00:07:08.050 9931.225 - 9981.637: 94.1720% ( 8) 00:07:08.050 9981.637 - 10032.049: 94.2176% ( 8) 00:07:08.050 10032.049 - 10082.462: 94.2746% ( 10) 00:07:08.050 10082.462 - 10132.874: 94.3887% ( 20) 00:07:08.050 10132.874 - 10183.286: 94.5198% ( 23) 00:07:08.050 10183.286 - 10233.698: 94.7651% ( 43) 00:07:08.050 10233.698 - 10284.111: 94.9133% ( 26) 00:07:08.050 10284.111 - 10334.523: 95.0730% ( 28) 00:07:08.050 10334.523 - 10384.935: 95.2669% ( 34) 00:07:08.050 10384.935 - 10435.348: 95.4380% ( 30) 00:07:08.050 10435.348 - 10485.760: 95.5976% ( 28) 00:07:08.050 10485.760 - 10536.172: 95.6889% ( 16) 00:07:08.050 10536.172 - 10586.585: 95.7687% ( 14) 00:07:08.050 10586.585 - 10636.997: 95.8599% ( 16) 00:07:08.050 10636.997 - 10687.409: 95.9341% ( 13) 00:07:08.050 10687.409 - 10737.822: 96.0538% ( 21) 00:07:08.050 10737.822 - 10788.234: 96.1793% ( 22) 00:07:08.050 10788.234 - 10838.646: 96.4074% ( 40) 00:07:08.050 10838.646 - 10889.058: 96.5500% ( 25) 00:07:08.050 10889.058 - 10939.471: 96.6355% ( 15) 00:07:08.050 10939.471 - 10989.883: 96.7267% ( 16) 00:07:08.050 10989.883 - 11040.295: 96.9149% ( 33) 00:07:08.050 11040.295 - 11090.708: 97.0746% ( 28) 00:07:08.050 11090.708 - 11141.120: 97.1601% ( 15) 00:07:08.050 11141.120 - 11191.532: 97.2000% ( 7) 00:07:08.050 11191.532 - 11241.945: 97.2286% ( 5) 00:07:08.050 11241.945 - 11292.357: 97.2571% ( 5) 00:07:08.050 11292.357 - 11342.769: 97.2799% ( 4) 00:07:08.050 11342.769 - 11393.182: 97.2913% ( 2) 00:07:08.050 11393.182 - 11443.594: 97.2970% ( 1) 00:07:08.050 11443.594 - 11494.006: 97.3084% ( 2) 00:07:08.050 11494.006 - 11544.418: 97.3198% ( 2) 00:07:08.051 11544.418 - 11594.831: 97.3255% ( 1) 00:07:08.051 11594.831 - 11645.243: 97.3369% ( 2) 00:07:08.051 11645.243 - 11695.655: 97.3483% ( 2) 00:07:08.051 11695.655 - 11746.068: 97.3540% ( 1) 00:07:08.051 11746.068 - 11796.480: 97.3654% ( 2) 00:07:08.051 11796.480 - 11846.892: 97.3768% ( 2) 00:07:08.051 11846.892 - 11897.305: 97.3825% ( 1) 00:07:08.051 11897.305 - 11947.717: 97.3939% ( 2) 00:07:08.051 11947.717 - 11998.129: 97.3996% ( 1) 00:07:08.051 11998.129 - 12048.542: 97.4224% ( 4) 00:07:08.051 12048.542 - 12098.954: 97.4339% ( 2) 00:07:08.051 12098.954 - 12149.366: 97.4624% ( 5) 00:07:08.051 12149.366 - 12199.778: 97.4795% ( 3) 00:07:08.051 12199.778 - 12250.191: 97.4909% ( 2) 00:07:08.051 12250.191 - 12300.603: 97.5194% ( 5) 00:07:08.051 12300.603 - 12351.015: 97.5479% ( 5) 00:07:08.051 12351.015 - 12401.428: 97.5764% ( 5) 00:07:08.051 12401.428 - 12451.840: 97.6106% ( 6) 00:07:08.051 12451.840 - 12502.252: 97.6277% ( 3) 00:07:08.051 12502.252 - 12552.665: 97.6505% ( 4) 00:07:08.051 12552.665 - 12603.077: 97.6848% ( 6) 00:07:08.051 12603.077 - 12653.489: 97.7361% ( 9) 00:07:08.051 12653.489 - 12703.902: 97.7988% ( 11) 00:07:08.051 12703.902 - 12754.314: 
97.8444% ( 8) 00:07:08.051 12754.314 - 12804.726: 97.9072% ( 11) 00:07:08.051 12804.726 - 12855.138: 98.0269% ( 21) 00:07:08.051 12855.138 - 12905.551: 98.1695% ( 25) 00:07:08.051 12905.551 - 13006.375: 98.3006% ( 23) 00:07:08.051 13006.375 - 13107.200: 98.4147% ( 20) 00:07:08.051 13107.200 - 13208.025: 98.4774% ( 11) 00:07:08.051 13208.025 - 13308.849: 98.5458% ( 12) 00:07:08.051 13308.849 - 13409.674: 98.6143% ( 12) 00:07:08.051 13409.674 - 13510.498: 98.6656% ( 9) 00:07:08.051 13510.498 - 13611.323: 98.7169% ( 9) 00:07:08.051 13611.323 - 13712.148: 98.8424% ( 22) 00:07:08.051 13712.148 - 13812.972: 98.9165% ( 13) 00:07:08.051 13812.972 - 13913.797: 98.9849% ( 12) 00:07:08.051 13913.797 - 14014.622: 99.0192% ( 6) 00:07:08.051 14014.622 - 14115.446: 99.0591% ( 7) 00:07:08.051 14115.446 - 14216.271: 99.1845% ( 22) 00:07:08.051 14216.271 - 14317.095: 99.2416% ( 10) 00:07:08.051 14317.095 - 14417.920: 99.2701% ( 5) 00:07:08.051 19459.151 - 19559.975: 99.2929% ( 4) 00:07:08.051 19559.975 - 19660.800: 99.3214% ( 5) 00:07:08.051 19660.800 - 19761.625: 99.3442% ( 4) 00:07:08.051 19761.625 - 19862.449: 99.3613% ( 3) 00:07:08.051 19862.449 - 19963.274: 99.3784% ( 3) 00:07:08.051 19963.274 - 20064.098: 99.3955% ( 3) 00:07:08.051 20064.098 - 20164.923: 99.4126% ( 3) 00:07:08.051 20164.923 - 20265.748: 99.4297% ( 3) 00:07:08.051 20265.748 - 20366.572: 99.4469% ( 3) 00:07:08.051 20366.572 - 20467.397: 99.4697% ( 4) 00:07:08.051 20467.397 - 20568.222: 99.4868% ( 3) 00:07:08.051 20568.222 - 20669.046: 99.5039% ( 3) 00:07:08.051 20669.046 - 20769.871: 99.5210% ( 3) 00:07:08.051 20769.871 - 20870.695: 99.5267% ( 1) 00:07:08.051 20870.695 - 20971.520: 99.5438% ( 3) 00:07:08.051 20971.520 - 21072.345: 99.5609% ( 3) 00:07:08.051 21072.345 - 21173.169: 99.5723% ( 2) 00:07:08.051 21173.169 - 21273.994: 99.5894% ( 3) 00:07:08.051 21273.994 - 21374.818: 99.6008% ( 2) 00:07:08.051 21374.818 - 21475.643: 99.6179% ( 3) 00:07:08.051 21475.643 - 21576.468: 99.6293% ( 2) 00:07:08.051 21576.468 - 21677.292: 99.6350% ( 1) 00:07:08.051 26012.751 - 26214.400: 99.6921% ( 10) 00:07:08.051 26214.400 - 26416.049: 99.7434% ( 9) 00:07:08.051 26416.049 - 26617.698: 99.7947% ( 9) 00:07:08.051 26617.698 - 26819.348: 99.9088% ( 20) 00:07:08.051 26819.348 - 27020.997: 99.9316% ( 4) 00:07:08.051 27424.295 - 27625.945: 99.9430% ( 2) 00:07:08.051 27625.945 - 27827.594: 99.9943% ( 9) 00:07:08.051 27827.594 - 28029.243: 100.0000% ( 1) 00:07:08.051 00:07:08.051 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:08.051 ============================================================================== 00:07:08.051 Range in us Cumulative IO count 00:07:08.051 5873.034 - 5898.240: 0.0057% ( 1) 00:07:08.051 5898.240 - 5923.446: 0.0171% ( 2) 00:07:08.051 5923.446 - 5948.652: 0.0285% ( 2) 00:07:08.051 5948.652 - 5973.858: 0.0399% ( 2) 00:07:08.051 5973.858 - 5999.065: 0.0684% ( 5) 00:07:08.051 5999.065 - 6024.271: 0.1141% ( 8) 00:07:08.051 6024.271 - 6049.477: 0.1711% ( 10) 00:07:08.051 6049.477 - 6074.683: 0.2680% ( 17) 00:07:08.051 6074.683 - 6099.889: 0.3992% ( 23) 00:07:08.051 6099.889 - 6125.095: 0.5189% ( 21) 00:07:08.051 6125.095 - 6150.302: 0.6501% ( 23) 00:07:08.051 6150.302 - 6175.508: 0.9580% ( 54) 00:07:08.051 6175.508 - 6200.714: 1.3401% ( 67) 00:07:08.051 6200.714 - 6225.920: 1.6423% ( 53) 00:07:08.051 6225.920 - 6251.126: 2.0871% ( 78) 00:07:08.051 6251.126 - 6276.332: 2.5433% ( 80) 00:07:08.051 6276.332 - 6301.538: 3.2048% ( 116) 00:07:08.051 6301.538 - 6326.745: 4.0830% ( 154) 00:07:08.051 6326.745 - 6351.951: 
4.6818% ( 105) 00:07:08.051 6351.951 - 6377.157: 5.5315% ( 149) 00:07:08.051 6377.157 - 6402.363: 6.8031% ( 223) 00:07:08.051 6402.363 - 6427.569: 8.5139% ( 300) 00:07:08.051 6427.569 - 6452.775: 10.2361% ( 302) 00:07:08.051 6452.775 - 6503.188: 12.9049% ( 468) 00:07:08.051 6503.188 - 6553.600: 17.6323% ( 829) 00:07:08.051 6553.600 - 6604.012: 22.6106% ( 873) 00:07:08.051 6604.012 - 6654.425: 28.4500% ( 1024) 00:07:08.051 6654.425 - 6704.837: 36.7245% ( 1451) 00:07:08.051 6704.837 - 6755.249: 42.0734% ( 938) 00:07:08.051 6755.249 - 6805.662: 48.1980% ( 1074) 00:07:08.051 6805.662 - 6856.074: 53.6724% ( 960) 00:07:08.051 6856.074 - 6906.486: 58.2459% ( 802) 00:07:08.051 6906.486 - 6956.898: 62.8079% ( 800) 00:07:08.051 6956.898 - 7007.311: 66.8054% ( 701) 00:07:08.051 7007.311 - 7057.723: 69.0522% ( 394) 00:07:08.051 7057.723 - 7108.135: 71.2363% ( 383) 00:07:08.051 7108.135 - 7158.548: 72.6334% ( 245) 00:07:08.051 7158.548 - 7208.960: 74.1503% ( 266) 00:07:08.051 7208.960 - 7259.372: 75.1996% ( 184) 00:07:08.051 7259.372 - 7309.785: 76.5682% ( 240) 00:07:08.051 7309.785 - 7360.197: 78.0395% ( 258) 00:07:08.051 7360.197 - 7410.609: 78.7865% ( 131) 00:07:08.051 7410.609 - 7461.022: 79.3796% ( 104) 00:07:08.051 7461.022 - 7511.434: 80.1722% ( 139) 00:07:08.051 7511.434 - 7561.846: 81.0162% ( 148) 00:07:08.051 7561.846 - 7612.258: 81.7347% ( 126) 00:07:08.051 7612.258 - 7662.671: 82.3734% ( 112) 00:07:08.051 7662.671 - 7713.083: 82.8581% ( 85) 00:07:08.051 7713.083 - 7763.495: 83.7591% ( 158) 00:07:08.051 7763.495 - 7813.908: 84.3807% ( 109) 00:07:08.051 7813.908 - 7864.320: 84.7913% ( 72) 00:07:08.051 7864.320 - 7914.732: 85.3672% ( 101) 00:07:08.051 7914.732 - 7965.145: 85.8691% ( 88) 00:07:08.051 7965.145 - 8015.557: 86.2226% ( 62) 00:07:08.051 8015.557 - 8065.969: 86.7016% ( 84) 00:07:08.051 8065.969 - 8116.382: 87.3232% ( 109) 00:07:08.051 8116.382 - 8166.794: 87.7395% ( 73) 00:07:08.051 8166.794 - 8217.206: 88.1330% ( 69) 00:07:08.051 8217.206 - 8267.618: 88.5778% ( 78) 00:07:08.051 8267.618 - 8318.031: 89.0397% ( 81) 00:07:08.051 8318.031 - 8368.443: 89.3248% ( 50) 00:07:08.051 8368.443 - 8418.855: 89.6898% ( 64) 00:07:08.051 8418.855 - 8469.268: 89.9977% ( 54) 00:07:08.051 8469.268 - 8519.680: 90.6820% ( 120) 00:07:08.051 8519.680 - 8570.092: 91.2295% ( 96) 00:07:08.051 8570.092 - 8620.505: 91.4348% ( 36) 00:07:08.051 8620.505 - 8670.917: 91.6344% ( 35) 00:07:08.051 8670.917 - 8721.329: 91.8568% ( 39) 00:07:08.051 8721.329 - 8771.742: 92.1191% ( 46) 00:07:08.052 8771.742 - 8822.154: 92.3073% ( 33) 00:07:08.052 8822.154 - 8872.566: 92.4897% ( 32) 00:07:08.052 8872.566 - 8922.978: 92.6437% ( 27) 00:07:08.052 8922.978 - 8973.391: 92.9859% ( 60) 00:07:08.052 8973.391 - 9023.803: 93.1284% ( 25) 00:07:08.052 9023.803 - 9074.215: 93.2368% ( 19) 00:07:08.052 9074.215 - 9124.628: 93.4991% ( 46) 00:07:08.052 9124.628 - 9175.040: 93.6017% ( 18) 00:07:08.052 9175.040 - 9225.452: 93.6987% ( 17) 00:07:08.052 9225.452 - 9275.865: 93.7614% ( 11) 00:07:08.052 9275.865 - 9326.277: 93.8641% ( 18) 00:07:08.052 9326.277 - 9376.689: 94.0408% ( 31) 00:07:08.052 9376.689 - 9427.102: 94.1036% ( 11) 00:07:08.052 9427.102 - 9477.514: 94.1378% ( 6) 00:07:08.052 9477.514 - 9527.926: 94.1834% ( 8) 00:07:08.052 9527.926 - 9578.338: 94.2005% ( 3) 00:07:08.052 9578.338 - 9628.751: 94.2233% ( 4) 00:07:08.052 9628.751 - 9679.163: 94.2518% ( 5) 00:07:08.052 9679.163 - 9729.575: 94.2803% ( 5) 00:07:08.052 9729.575 - 9779.988: 94.3716% ( 16) 00:07:08.052 9779.988 - 9830.400: 94.4742% ( 18) 00:07:08.052 9830.400 - 9880.812: 
94.6054% ( 23) 00:07:08.052 9880.812 - 9931.225: 94.7308% ( 22) 00:07:08.052 9931.225 - 9981.637: 94.8392% ( 19) 00:07:08.052 9981.637 - 10032.049: 94.9304% ( 16) 00:07:08.052 10032.049 - 10082.462: 95.0160% ( 15) 00:07:08.052 10082.462 - 10132.874: 95.1129% ( 17) 00:07:08.052 10132.874 - 10183.286: 95.2042% ( 16) 00:07:08.052 10183.286 - 10233.698: 95.3581% ( 27) 00:07:08.052 10233.698 - 10284.111: 95.6090% ( 44) 00:07:08.052 10284.111 - 10334.523: 95.7744% ( 29) 00:07:08.052 10334.523 - 10384.935: 95.8656% ( 16) 00:07:08.052 10384.935 - 10435.348: 95.9455% ( 14) 00:07:08.052 10435.348 - 10485.760: 96.0310% ( 15) 00:07:08.052 10485.760 - 10536.172: 96.1451% ( 20) 00:07:08.052 10536.172 - 10586.585: 96.2363% ( 16) 00:07:08.052 10586.585 - 10636.997: 96.3047% ( 12) 00:07:08.052 10636.997 - 10687.409: 96.3675% ( 11) 00:07:08.052 10687.409 - 10737.822: 96.4758% ( 19) 00:07:08.052 10737.822 - 10788.234: 96.5271% ( 9) 00:07:08.052 10788.234 - 10838.646: 96.5614% ( 6) 00:07:08.052 10838.646 - 10889.058: 96.5956% ( 6) 00:07:08.052 10889.058 - 10939.471: 96.6355% ( 7) 00:07:08.052 10939.471 - 10989.883: 96.6754% ( 7) 00:07:08.052 10989.883 - 11040.295: 96.7096% ( 6) 00:07:08.052 11040.295 - 11090.708: 96.7381% ( 5) 00:07:08.052 11090.708 - 11141.120: 96.7609% ( 4) 00:07:08.052 11141.120 - 11191.532: 96.7838% ( 4) 00:07:08.052 11191.532 - 11241.945: 96.8180% ( 6) 00:07:08.052 11241.945 - 11292.357: 96.8693% ( 9) 00:07:08.052 11292.357 - 11342.769: 96.9948% ( 22) 00:07:08.052 11342.769 - 11393.182: 97.1316% ( 24) 00:07:08.052 11393.182 - 11443.594: 97.1715% ( 7) 00:07:08.052 11443.594 - 11494.006: 97.1943% ( 4) 00:07:08.052 11494.006 - 11544.418: 97.2457% ( 9) 00:07:08.052 11544.418 - 11594.831: 97.3141% ( 12) 00:07:08.052 11594.831 - 11645.243: 97.3768% ( 11) 00:07:08.052 11645.243 - 11695.655: 97.4281% ( 9) 00:07:08.052 11695.655 - 11746.068: 97.4396% ( 2) 00:07:08.052 11746.068 - 11796.480: 97.4453% ( 1) 00:07:08.052 12351.015 - 12401.428: 97.4624% ( 3) 00:07:08.052 12401.428 - 12451.840: 97.4681% ( 1) 00:07:08.052 12451.840 - 12502.252: 97.4738% ( 1) 00:07:08.052 12502.252 - 12552.665: 97.4909% ( 3) 00:07:08.052 12552.665 - 12603.077: 97.5023% ( 2) 00:07:08.052 12603.077 - 12653.489: 97.5137% ( 2) 00:07:08.052 12653.489 - 12703.902: 97.5593% ( 8) 00:07:08.052 12703.902 - 12754.314: 97.6448% ( 15) 00:07:08.052 12754.314 - 12804.726: 97.7418% ( 17) 00:07:08.052 12804.726 - 12855.138: 97.7874% ( 8) 00:07:08.052 12855.138 - 12905.551: 97.8216% ( 6) 00:07:08.052 12905.551 - 13006.375: 97.9129% ( 16) 00:07:08.052 13006.375 - 13107.200: 98.0611% ( 26) 00:07:08.052 13107.200 - 13208.025: 98.2037% ( 25) 00:07:08.052 13208.025 - 13308.849: 98.2892% ( 15) 00:07:08.052 13308.849 - 13409.674: 98.3748% ( 15) 00:07:08.052 13409.674 - 13510.498: 98.4945% ( 21) 00:07:08.052 13510.498 - 13611.323: 98.5858% ( 16) 00:07:08.052 13611.323 - 13712.148: 98.6371% ( 9) 00:07:08.052 13712.148 - 13812.972: 98.6941% ( 10) 00:07:08.052 13812.972 - 13913.797: 98.8481% ( 27) 00:07:08.052 13913.797 - 14014.622: 98.9507% ( 18) 00:07:08.052 14014.622 - 14115.446: 99.0135% ( 11) 00:07:08.052 14115.446 - 14216.271: 99.0534% ( 7) 00:07:08.052 14216.271 - 14317.095: 99.0876% ( 6) 00:07:08.052 14317.095 - 14417.920: 99.2073% ( 21) 00:07:08.052 14417.920 - 14518.745: 99.2359% ( 5) 00:07:08.052 14518.745 - 14619.569: 99.2644% ( 5) 00:07:08.052 14619.569 - 14720.394: 99.2701% ( 1) 00:07:08.052 18450.905 - 18551.729: 99.2758% ( 1) 00:07:08.052 18753.378 - 18854.203: 99.2815% ( 1) 00:07:08.052 19257.502 - 19358.326: 99.2986% ( 3) 
00:07:08.052 19358.326 - 19459.151: 99.3100% ( 2) 00:07:08.052 19459.151 - 19559.975: 99.3328% ( 4) 00:07:08.052 19559.975 - 19660.800: 99.3442% ( 2) 00:07:08.052 19660.800 - 19761.625: 99.3670% ( 4) 00:07:08.052 19761.625 - 19862.449: 99.3784% ( 2) 00:07:08.052 19862.449 - 19963.274: 99.3955% ( 3) 00:07:08.052 19963.274 - 20064.098: 99.4297% ( 6) 00:07:08.052 20064.098 - 20164.923: 99.4868% ( 10) 00:07:08.052 20164.923 - 20265.748: 99.5039% ( 3) 00:07:08.052 20265.748 - 20366.572: 99.5267% ( 4) 00:07:08.052 20366.572 - 20467.397: 99.5495% ( 4) 00:07:08.052 20467.397 - 20568.222: 99.5666% ( 3) 00:07:08.052 20568.222 - 20669.046: 99.5894% ( 4) 00:07:08.052 20669.046 - 20769.871: 99.6122% ( 4) 00:07:08.052 20769.871 - 20870.695: 99.6350% ( 4) 00:07:08.052 24601.206 - 24702.031: 99.6578% ( 4) 00:07:08.052 24702.031 - 24802.855: 99.6864% ( 5) 00:07:08.052 24802.855 - 24903.680: 99.7092% ( 4) 00:07:08.052 24903.680 - 25004.505: 99.7377% ( 5) 00:07:08.052 25004.505 - 25105.329: 99.7662% ( 5) 00:07:08.052 25105.329 - 25206.154: 99.8061% ( 7) 00:07:08.052 25206.154 - 25306.978: 99.8917% ( 15) 00:07:08.052 25306.978 - 25407.803: 99.9772% ( 15) 00:07:08.052 25407.803 - 25508.628: 100.0000% ( 4) 00:07:08.052 00:07:08.052 10:35:33 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:08.052 00:07:08.052 real 0m2.484s 00:07:08.052 user 0m2.194s 00:07:08.052 sys 0m0.193s 00:07:08.052 10:35:33 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.052 10:35:33 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.052 ************************************ 00:07:08.052 END TEST nvme_perf 00:07:08.052 ************************************ 00:07:08.052 10:35:33 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:08.052 10:35:33 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:08.052 10:35:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.052 10:35:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.052 ************************************ 00:07:08.052 START TEST nvme_hello_world 00:07:08.052 ************************************ 00:07:08.052 10:35:33 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:08.052 Initializing NVMe Controllers 00:07:08.052 Attached to 0000:00:10.0 00:07:08.052 Namespace ID: 1 size: 6GB 00:07:08.052 Attached to 0000:00:11.0 00:07:08.052 Namespace ID: 1 size: 5GB 00:07:08.052 Attached to 0000:00:13.0 00:07:08.052 Namespace ID: 1 size: 1GB 00:07:08.052 Attached to 0000:00:12.0 00:07:08.052 Namespace ID: 1 size: 4GB 00:07:08.052 Namespace ID: 2 size: 4GB 00:07:08.052 Namespace ID: 3 size: 4GB 00:07:08.052 Initialization complete. 00:07:08.052 INFO: using host memory buffer for IO 00:07:08.052 Hello world! 00:07:08.052 INFO: using host memory buffer for IO 00:07:08.052 Hello world! 00:07:08.052 INFO: using host memory buffer for IO 00:07:08.052 Hello world! 00:07:08.052 INFO: using host memory buffer for IO 00:07:08.052 Hello world! 00:07:08.053 INFO: using host memory buffer for IO 00:07:08.053 Hello world! 00:07:08.053 INFO: using host memory buffer for IO 00:07:08.053 Hello world! 
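The hello_world run above attaches each of the four controllers, reports the namespace sizes, and prints one "Hello world!" per namespace using a host memory buffer for IO. A minimal sketch of reproducing it by hand outside the harness, assuming the same SPDK build tree and that the NVMe devices are already bound for userspace I/O; the HUGEMEM value and the reading of -i as a shared-memory id are assumptions, not shown in this log:

  # reserve hugepages and bind the NVMe devices to a userspace driver (assumed prerequisite)
  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # invoke the example with the same arguments the harness used above
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0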
00:07:08.053 ************************************ 00:07:08.053 END TEST nvme_hello_world 00:07:08.053 ************************************ 00:07:08.053 00:07:08.053 real 0m0.211s 00:07:08.053 user 0m0.072s 00:07:08.053 sys 0m0.094s 00:07:08.053 10:35:33 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.053 10:35:33 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:08.053 10:35:33 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:08.053 10:35:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.053 10:35:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.053 10:35:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.053 ************************************ 00:07:08.053 START TEST nvme_sgl 00:07:08.053 ************************************ 00:07:08.053 10:35:33 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:08.346 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:08.346 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:08.346 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:08.346 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:08.346 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:08.346 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:08.346 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:08.346 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:08.346 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:08.346 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:08.346 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:08.346 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:08.346 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:08.346 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:08.346 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:08.346 NVMe Readv/Writev Request test 00:07:08.346 Attached to 0000:00:10.0 00:07:08.346 Attached to 0000:00:11.0 00:07:08.346 Attached to 0000:00:13.0 00:07:08.346 Attached to 0000:00:12.0 00:07:08.346 0000:00:10.0: build_io_request_2 test passed 00:07:08.346 0000:00:10.0: build_io_request_4 test passed 00:07:08.346 0000:00:10.0: build_io_request_5 test passed 00:07:08.346 0000:00:10.0: build_io_request_6 test passed 00:07:08.346 0000:00:10.0: build_io_request_7 test passed 00:07:08.346 0000:00:10.0: build_io_request_10 test passed 00:07:08.346 0000:00:11.0: build_io_request_2 test passed 00:07:08.346 0000:00:11.0: build_io_request_4 test passed 00:07:08.346 0000:00:11.0: build_io_request_5 test passed 00:07:08.346 0000:00:11.0: build_io_request_6 test passed 00:07:08.346 0000:00:11.0: build_io_request_7 test passed 00:07:08.346 0000:00:11.0: build_io_request_10 test passed 00:07:08.346 Cleaning up... 00:07:08.346 00:07:08.346 real 0m0.278s 00:07:08.346 user 0m0.145s 00:07:08.346 sys 0m0.089s 00:07:08.346 10:35:34 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.346 10:35:34 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:08.346 ************************************ 00:07:08.346 END TEST nvme_sgl 00:07:08.346 ************************************ 00:07:08.346 10:35:34 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:08.346 10:35:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.346 10:35:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.346 10:35:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.346 ************************************ 00:07:08.346 START TEST nvme_e2edp 00:07:08.346 ************************************ 00:07:08.346 10:35:34 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:08.605 NVMe Write/Read with End-to-End data protection test 00:07:08.605 Attached to 0000:00:10.0 00:07:08.605 Attached to 0000:00:11.0 00:07:08.605 Attached to 0000:00:13.0 00:07:08.605 Attached to 0000:00:12.0 00:07:08.605 Cleaning up... 
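In the nvme_sgl section above, build_io_request_{0,1,3,8,9,11} are rejected with "Invalid IO length parameter" on every controller, while requests 2, 4, 5, 6, 7 and 10 are reported as "test passed" for 0000:00:10.0 and 0000:00:11.0; for 0000:00:13.0 and 0000:00:12.0 all twelve requests show the rejection message. A quick way to tally the two outcomes per controller, assuming this console output has been saved to a file named build.log (the filename is an assumption):

  # count passed vs. rejected build_io_request cases for each PCI address seen in the log
  grep -oE '0000:00:1[0-3]\.0: build_io_request_[0-9]+ test passed' build.log | sort | uniq -c
  grep -oE '0000:00:1[0-3]\.0: build_io_request_[0-9]+ Invalid' build.log | sort | uniq -c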
00:07:08.605 00:07:08.605 real 0m0.203s 00:07:08.605 user 0m0.067s 00:07:08.605 sys 0m0.095s 00:07:08.605 10:35:34 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.605 10:35:34 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:08.605 ************************************ 00:07:08.605 END TEST nvme_e2edp 00:07:08.605 ************************************ 00:07:08.605 10:35:34 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:08.605 10:35:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.605 10:35:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.605 10:35:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.605 ************************************ 00:07:08.605 START TEST nvme_reserve 00:07:08.605 ************************************ 00:07:08.605 10:35:34 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:08.863 ===================================================== 00:07:08.863 NVMe Controller at PCI bus 0, device 16, function 0 00:07:08.863 ===================================================== 00:07:08.863 Reservations: Not Supported 00:07:08.863 ===================================================== 00:07:08.863 NVMe Controller at PCI bus 0, device 17, function 0 00:07:08.863 ===================================================== 00:07:08.863 Reservations: Not Supported 00:07:08.863 ===================================================== 00:07:08.863 NVMe Controller at PCI bus 0, device 19, function 0 00:07:08.863 ===================================================== 00:07:08.863 Reservations: Not Supported 00:07:08.863 ===================================================== 00:07:08.863 NVMe Controller at PCI bus 0, device 18, function 0 00:07:08.863 ===================================================== 00:07:08.863 Reservations: Not Supported 00:07:08.863 Reservation test passed 00:07:08.863 ************************************ 00:07:08.863 END TEST nvme_reserve 00:07:08.863 ************************************ 00:07:08.863 00:07:08.863 real 0m0.216s 00:07:08.863 user 0m0.072s 00:07:08.863 sys 0m0.097s 00:07:08.863 10:35:34 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.863 10:35:34 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:08.863 10:35:34 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:08.863 10:35:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.863 10:35:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.863 10:35:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.863 ************************************ 00:07:08.863 START TEST nvme_err_injection 00:07:08.863 ************************************ 00:07:08.863 10:35:34 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:09.121 NVMe Error Injection test 00:07:09.121 Attached to 0000:00:10.0 00:07:09.121 Attached to 0000:00:11.0 00:07:09.121 Attached to 0000:00:13.0 00:07:09.121 Attached to 0000:00:12.0 00:07:09.121 0000:00:11.0: get features failed as expected 00:07:09.121 0000:00:13.0: get features failed as expected 00:07:09.121 0000:00:12.0: get features failed as expected 00:07:09.121 0000:00:10.0: get features failed as expected 00:07:09.121 
0000:00:10.0: get features successfully as expected 00:07:09.121 0000:00:11.0: get features successfully as expected 00:07:09.121 0000:00:13.0: get features successfully as expected 00:07:09.121 0000:00:12.0: get features successfully as expected 00:07:09.121 0000:00:10.0: read failed as expected 00:07:09.121 0000:00:11.0: read failed as expected 00:07:09.121 0000:00:13.0: read failed as expected 00:07:09.121 0000:00:12.0: read failed as expected 00:07:09.121 0000:00:10.0: read successfully as expected 00:07:09.121 0000:00:11.0: read successfully as expected 00:07:09.121 0000:00:13.0: read successfully as expected 00:07:09.121 0000:00:12.0: read successfully as expected 00:07:09.121 Cleaning up... 00:07:09.121 ************************************ 00:07:09.121 END TEST nvme_err_injection 00:07:09.121 ************************************ 00:07:09.121 00:07:09.121 real 0m0.219s 00:07:09.121 user 0m0.082s 00:07:09.121 sys 0m0.095s 00:07:09.121 10:35:34 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.121 10:35:34 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:09.121 10:35:34 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:09.121 10:35:34 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:09.121 10:35:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.121 10:35:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.121 ************************************ 00:07:09.121 START TEST nvme_overhead 00:07:09.121 ************************************ 00:07:09.121 10:35:34 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:10.495 Initializing NVMe Controllers 00:07:10.495 Attached to 0000:00:10.0 00:07:10.495 Attached to 0000:00:11.0 00:07:10.495 Attached to 0000:00:13.0 00:07:10.495 Attached to 0000:00:12.0 00:07:10.495 Initialization complete. Launching workers. 
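The nvme_overhead run that starts here was launched as overhead -o 4096 -t 1 -H -i 0. The flag meanings are inferred from the values rather than stated in the log, so treat them as assumptions: -o 4096 plausibly sets the I/O size in bytes and -t 1 the run time in seconds. A sketch of repeating the measurement at a larger I/O size under those same assumptions:

  # same invocation as the harness above, with the assumed I/O size raised to 8 KiB
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 8192 -t 1 -H -i 0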
00:07:10.495 submit (in ns) avg, min, max = 11344.0, 10130.8, 241203.1 00:07:10.496 complete (in ns) avg, min, max = 7695.7, 7376.2, 113215.4 00:07:10.496 00:07:10.496 Submit histogram 00:07:10.496 ================ 00:07:10.496 Range in us Cumulative Count 00:07:10.496 10.092 - 10.142: 0.0055% ( 1) 00:07:10.496 10.683 - 10.732: 0.0111% ( 1) 00:07:10.496 10.782 - 10.831: 0.0222% ( 2) 00:07:10.496 10.831 - 10.880: 0.0609% ( 7) 00:07:10.496 10.880 - 10.929: 0.4044% ( 62) 00:07:10.496 10.929 - 10.978: 2.4211% ( 364) 00:07:10.496 10.978 - 11.028: 11.1025% ( 1567) 00:07:10.496 11.028 - 11.077: 30.3657% ( 3477) 00:07:10.496 11.077 - 11.126: 53.4183% ( 4161) 00:07:10.496 11.126 - 11.175: 70.0997% ( 3011) 00:07:10.496 11.175 - 11.225: 79.3296% ( 1666) 00:07:10.496 11.225 - 11.274: 84.1607% ( 872) 00:07:10.496 11.274 - 11.323: 86.8089% ( 478) 00:07:10.496 11.323 - 11.372: 88.3823% ( 284) 00:07:10.496 11.372 - 11.422: 89.2687% ( 160) 00:07:10.496 11.422 - 11.471: 89.8227% ( 100) 00:07:10.496 11.471 - 11.520: 90.2438% ( 76) 00:07:10.496 11.520 - 11.569: 90.6593% ( 75) 00:07:10.496 11.569 - 11.618: 91.0139% ( 64) 00:07:10.496 11.618 - 11.668: 91.3906% ( 68) 00:07:10.496 11.668 - 11.717: 91.8172% ( 77) 00:07:10.496 11.717 - 11.766: 92.2105% ( 71) 00:07:10.496 11.766 - 11.815: 92.5485% ( 61) 00:07:10.496 11.815 - 11.865: 92.8753% ( 59) 00:07:10.496 11.865 - 11.914: 93.2299% ( 64) 00:07:10.496 11.914 - 11.963: 93.5512% ( 58) 00:07:10.496 11.963 - 12.012: 93.8670% ( 57) 00:07:10.496 12.012 - 12.062: 94.3989% ( 96) 00:07:10.496 12.062 - 12.111: 94.9307% ( 96) 00:07:10.496 12.111 - 12.160: 95.4349% ( 91) 00:07:10.496 12.160 - 12.209: 95.9169% ( 87) 00:07:10.496 12.209 - 12.258: 96.3047% ( 70) 00:07:10.496 12.258 - 12.308: 96.5596% ( 46) 00:07:10.496 12.308 - 12.357: 96.6925% ( 24) 00:07:10.496 12.357 - 12.406: 96.8310% ( 25) 00:07:10.496 12.406 - 12.455: 96.8975% ( 12) 00:07:10.496 12.455 - 12.505: 96.9363% ( 7) 00:07:10.496 12.505 - 12.554: 96.9751% ( 7) 00:07:10.496 12.554 - 12.603: 97.0139% ( 7) 00:07:10.496 12.603 - 12.702: 97.0526% ( 7) 00:07:10.496 12.702 - 12.800: 97.0970% ( 8) 00:07:10.496 12.800 - 12.898: 97.1413% ( 8) 00:07:10.496 12.898 - 12.997: 97.2022% ( 11) 00:07:10.496 12.997 - 13.095: 97.2742% ( 13) 00:07:10.496 13.095 - 13.194: 97.3407% ( 12) 00:07:10.496 13.194 - 13.292: 97.5014% ( 29) 00:07:10.496 13.292 - 13.391: 97.6510% ( 27) 00:07:10.496 13.391 - 13.489: 97.7396% ( 16) 00:07:10.496 13.489 - 13.588: 97.7950% ( 10) 00:07:10.496 13.588 - 13.686: 97.8504% ( 10) 00:07:10.496 13.686 - 13.785: 97.9280% ( 14) 00:07:10.496 13.785 - 13.883: 97.9501% ( 4) 00:07:10.496 13.883 - 13.982: 97.9612% ( 2) 00:07:10.496 13.982 - 14.080: 97.9723% ( 2) 00:07:10.496 14.080 - 14.178: 97.9945% ( 4) 00:07:10.496 14.178 - 14.277: 98.0443% ( 9) 00:07:10.496 14.277 - 14.375: 98.0609% ( 3) 00:07:10.496 14.375 - 14.474: 98.0831% ( 4) 00:07:10.496 14.474 - 14.572: 98.0997% ( 3) 00:07:10.496 14.572 - 14.671: 98.1163% ( 3) 00:07:10.496 14.671 - 14.769: 98.1440% ( 5) 00:07:10.496 14.769 - 14.868: 98.1884% ( 8) 00:07:10.496 14.868 - 14.966: 98.2548% ( 12) 00:07:10.496 14.966 - 15.065: 98.2881% ( 6) 00:07:10.496 15.065 - 15.163: 98.2992% ( 2) 00:07:10.496 15.163 - 15.262: 98.3102% ( 2) 00:07:10.496 15.262 - 15.360: 98.3324% ( 4) 00:07:10.496 15.360 - 15.458: 98.3490% ( 3) 00:07:10.496 15.458 - 15.557: 98.3823% ( 6) 00:07:10.496 15.557 - 15.655: 98.3989% ( 3) 00:07:10.496 15.655 - 15.754: 98.4100% ( 2) 00:07:10.496 15.754 - 15.852: 98.4266% ( 3) 00:07:10.496 15.852 - 15.951: 98.4321% ( 1) 00:07:10.496 15.951 - 16.049: 
98.4377% ( 1) 00:07:10.496 16.049 - 16.148: 98.4488% ( 2) 00:07:10.496 16.148 - 16.246: 98.4598% ( 2) 00:07:10.496 16.246 - 16.345: 98.4765% ( 3) 00:07:10.496 16.345 - 16.443: 98.5097% ( 6) 00:07:10.496 16.443 - 16.542: 98.5319% ( 4) 00:07:10.496 16.542 - 16.640: 98.6482% ( 21) 00:07:10.496 16.640 - 16.738: 98.7258% ( 14) 00:07:10.496 16.738 - 16.837: 98.8199% ( 17) 00:07:10.496 16.837 - 16.935: 98.9252% ( 19) 00:07:10.496 16.935 - 17.034: 98.9806% ( 10) 00:07:10.496 17.034 - 17.132: 99.0582% ( 14) 00:07:10.496 17.132 - 17.231: 99.1468% ( 16) 00:07:10.496 17.231 - 17.329: 99.2632% ( 21) 00:07:10.496 17.329 - 17.428: 99.3573% ( 17) 00:07:10.496 17.428 - 17.526: 99.4349% ( 14) 00:07:10.496 17.526 - 17.625: 99.4792% ( 8) 00:07:10.496 17.625 - 17.723: 99.5623% ( 15) 00:07:10.496 17.723 - 17.822: 99.5789% ( 3) 00:07:10.496 17.822 - 17.920: 99.6177% ( 7) 00:07:10.496 17.920 - 18.018: 99.6454% ( 5) 00:07:10.496 18.018 - 18.117: 99.6620% ( 3) 00:07:10.496 18.117 - 18.215: 99.6676% ( 1) 00:07:10.496 18.215 - 18.314: 99.6898% ( 4) 00:07:10.496 18.314 - 18.412: 99.7064% ( 3) 00:07:10.496 18.412 - 18.511: 99.7285% ( 4) 00:07:10.496 18.511 - 18.609: 99.7341% ( 1) 00:07:10.496 18.609 - 18.708: 99.7396% ( 1) 00:07:10.496 18.708 - 18.806: 99.7507% ( 2) 00:07:10.496 18.905 - 19.003: 99.7562% ( 1) 00:07:10.496 19.003 - 19.102: 99.7673% ( 2) 00:07:10.496 19.102 - 19.200: 99.7729% ( 1) 00:07:10.496 19.200 - 19.298: 99.7784% ( 1) 00:07:10.496 19.594 - 19.692: 99.7839% ( 1) 00:07:10.496 19.692 - 19.791: 99.7895% ( 1) 00:07:10.496 19.791 - 19.889: 99.8006% ( 2) 00:07:10.496 19.889 - 19.988: 99.8061% ( 1) 00:07:10.496 19.988 - 20.086: 99.8116% ( 1) 00:07:10.496 20.185 - 20.283: 99.8172% ( 1) 00:07:10.496 20.480 - 20.578: 99.8227% ( 1) 00:07:10.496 20.578 - 20.677: 99.8338% ( 2) 00:07:10.496 20.775 - 20.874: 99.8393% ( 1) 00:07:10.496 20.874 - 20.972: 99.8504% ( 2) 00:07:10.496 21.071 - 21.169: 99.8560% ( 1) 00:07:10.496 21.563 - 21.662: 99.8615% ( 1) 00:07:10.496 21.858 - 21.957: 99.8670% ( 1) 00:07:10.496 22.055 - 22.154: 99.8781% ( 2) 00:07:10.496 22.154 - 22.252: 99.8947% ( 3) 00:07:10.496 22.252 - 22.351: 99.9003% ( 1) 00:07:10.496 22.449 - 22.548: 99.9058% ( 1) 00:07:10.496 22.548 - 22.646: 99.9114% ( 1) 00:07:10.496 22.745 - 22.843: 99.9169% ( 1) 00:07:10.496 22.843 - 22.942: 99.9224% ( 1) 00:07:10.496 23.237 - 23.335: 99.9280% ( 1) 00:07:10.496 24.615 - 24.714: 99.9335% ( 1) 00:07:10.496 25.403 - 25.600: 99.9391% ( 1) 00:07:10.496 28.554 - 28.751: 99.9501% ( 2) 00:07:10.496 36.825 - 37.022: 99.9557% ( 1) 00:07:10.496 38.006 - 38.203: 99.9612% ( 1) 00:07:10.496 40.763 - 40.960: 99.9668% ( 1) 00:07:10.496 41.354 - 41.551: 99.9723% ( 1) 00:07:10.496 42.732 - 42.929: 99.9778% ( 1) 00:07:10.496 46.474 - 46.671: 99.9834% ( 1) 00:07:10.496 53.563 - 53.957: 99.9889% ( 1) 00:07:10.496 54.351 - 54.745: 99.9945% ( 1) 00:07:10.496 241.034 - 242.609: 100.0000% ( 1) 00:07:10.496 00:07:10.496 Complete histogram 00:07:10.496 ================== 00:07:10.496 Range in us Cumulative Count 00:07:10.496 7.335 - 7.385: 0.0222% ( 4) 00:07:10.496 7.385 - 7.434: 1.6620% ( 296) 00:07:10.496 7.434 - 7.483: 12.4709% ( 1951) 00:07:10.496 7.483 - 7.532: 33.7839% ( 3847) 00:07:10.496 7.532 - 7.582: 58.7535% ( 4507) 00:07:10.496 7.582 - 7.631: 78.4321% ( 3552) 00:07:10.496 7.631 - 7.680: 88.2382% ( 1770) 00:07:10.496 7.680 - 7.729: 92.5928% ( 786) 00:07:10.496 7.729 - 7.778: 94.7479% ( 389) 00:07:10.496 7.778 - 7.828: 95.7839% ( 187) 00:07:10.496 7.828 - 7.877: 96.2604% ( 86) 00:07:10.496 7.877 - 7.926: 96.5762% ( 57) 00:07:10.496 7.926 - 
7.975: 96.7202% ( 26) 00:07:10.496 7.975 - 8.025: 96.8199% ( 18) 00:07:10.496 8.025 - 8.074: 96.8532% ( 6) 00:07:10.496 8.074 - 8.123: 96.9141% ( 11) 00:07:10.496 8.123 - 8.172: 96.9972% ( 15) 00:07:10.496 8.172 - 8.222: 97.2078% ( 38) 00:07:10.496 8.222 - 8.271: 97.5291% ( 58) 00:07:10.496 8.271 - 8.320: 97.8006% ( 49) 00:07:10.496 8.320 - 8.369: 98.0388% ( 43) 00:07:10.496 8.369 - 8.418: 98.1607% ( 22) 00:07:10.496 8.418 - 8.468: 98.2438% ( 15) 00:07:10.496 8.468 - 8.517: 98.2659% ( 4) 00:07:10.496 8.517 - 8.566: 98.2770% ( 2) 00:07:10.496 8.566 - 8.615: 98.3102% ( 6) 00:07:10.496 8.615 - 8.665: 98.3213% ( 2) 00:07:10.496 8.665 - 8.714: 98.3324% ( 2) 00:07:10.496 8.714 - 8.763: 98.3435% ( 2) 00:07:10.496 8.862 - 8.911: 98.3490% ( 1) 00:07:10.496 9.058 - 9.108: 98.3601% ( 2) 00:07:10.496 9.354 - 9.403: 98.3657% ( 1) 00:07:10.496 9.403 - 9.452: 98.3712% ( 1) 00:07:10.496 9.502 - 9.551: 98.3767% ( 1) 00:07:10.496 9.551 - 9.600: 98.3823% ( 1) 00:07:10.496 9.600 - 9.649: 98.3878% ( 1) 00:07:10.497 9.698 - 9.748: 98.3934% ( 1) 00:07:10.497 9.748 - 9.797: 98.4044% ( 2) 00:07:10.497 9.797 - 9.846: 98.4155% ( 2) 00:07:10.497 9.895 - 9.945: 98.4377% ( 4) 00:07:10.497 9.945 - 9.994: 98.4488% ( 2) 00:07:10.497 10.043 - 10.092: 98.4543% ( 1) 00:07:10.497 10.092 - 10.142: 98.4709% ( 3) 00:07:10.497 10.142 - 10.191: 98.4875% ( 3) 00:07:10.497 10.191 - 10.240: 98.4931% ( 1) 00:07:10.497 10.240 - 10.289: 98.5042% ( 2) 00:07:10.497 10.289 - 10.338: 98.5208% ( 3) 00:07:10.497 10.338 - 10.388: 98.5319% ( 2) 00:07:10.497 10.486 - 10.535: 98.5374% ( 1) 00:07:10.497 10.535 - 10.585: 98.5485% ( 2) 00:07:10.497 10.585 - 10.634: 98.5540% ( 1) 00:07:10.497 10.634 - 10.683: 98.5596% ( 1) 00:07:10.497 10.683 - 10.732: 98.5651% ( 1) 00:07:10.497 11.225 - 11.274: 98.5706% ( 1) 00:07:10.497 11.274 - 11.323: 98.5762% ( 1) 00:07:10.497 11.569 - 11.618: 98.5817% ( 1) 00:07:10.497 12.455 - 12.505: 98.5873% ( 1) 00:07:10.497 12.603 - 12.702: 98.5928% ( 1) 00:07:10.497 12.702 - 12.800: 98.5983% ( 1) 00:07:10.497 12.800 - 12.898: 98.6205% ( 4) 00:07:10.497 12.898 - 12.997: 98.6759% ( 10) 00:07:10.497 12.997 - 13.095: 98.7202% ( 8) 00:07:10.497 13.095 - 13.194: 98.8144% ( 17) 00:07:10.497 13.194 - 13.292: 98.8643% ( 9) 00:07:10.497 13.292 - 13.391: 98.9307% ( 12) 00:07:10.497 13.391 - 13.489: 99.0028% ( 13) 00:07:10.497 13.489 - 13.588: 99.0803% ( 14) 00:07:10.497 13.588 - 13.686: 99.1856% ( 19) 00:07:10.497 13.686 - 13.785: 99.2521% ( 12) 00:07:10.497 13.785 - 13.883: 99.3241% ( 13) 00:07:10.497 13.883 - 13.982: 99.4404% ( 21) 00:07:10.497 13.982 - 14.080: 99.5069% ( 12) 00:07:10.497 14.080 - 14.178: 99.5512% ( 8) 00:07:10.497 14.178 - 14.277: 99.5900% ( 7) 00:07:10.497 14.277 - 14.375: 99.6233% ( 6) 00:07:10.497 14.375 - 14.474: 99.6731% ( 9) 00:07:10.497 14.474 - 14.572: 99.6953% ( 4) 00:07:10.497 14.572 - 14.671: 99.7064% ( 2) 00:07:10.497 14.671 - 14.769: 99.7285% ( 4) 00:07:10.497 14.769 - 14.868: 99.7452% ( 3) 00:07:10.497 14.868 - 14.966: 99.7507% ( 1) 00:07:10.497 14.966 - 15.065: 99.7784% ( 5) 00:07:10.497 15.065 - 15.163: 99.7895% ( 2) 00:07:10.497 15.360 - 15.458: 99.7950% ( 1) 00:07:10.497 16.049 - 16.148: 99.8006% ( 1) 00:07:10.497 16.148 - 16.246: 99.8116% ( 2) 00:07:10.497 16.246 - 16.345: 99.8172% ( 1) 00:07:10.497 16.345 - 16.443: 99.8283% ( 2) 00:07:10.497 16.542 - 16.640: 99.8338% ( 1) 00:07:10.497 16.640 - 16.738: 99.8393% ( 1) 00:07:10.497 16.738 - 16.837: 99.8449% ( 1) 00:07:10.497 16.935 - 17.034: 99.8504% ( 1) 00:07:10.497 17.034 - 17.132: 99.8560% ( 1) 00:07:10.497 17.428 - 17.526: 99.8670% ( 2) 
00:07:10.497 17.920 - 18.018: 99.8781% ( 2) 00:07:10.497 18.314 - 18.412: 99.8837% ( 1) 00:07:10.497 18.412 - 18.511: 99.8892% ( 1) 00:07:10.497 18.609 - 18.708: 99.8947% ( 1) 00:07:10.497 18.905 - 19.003: 99.9003% ( 1) 00:07:10.497 19.200 - 19.298: 99.9058% ( 1) 00:07:10.497 19.397 - 19.495: 99.9114% ( 1) 00:07:10.497 19.692 - 19.791: 99.9169% ( 1) 00:07:10.497 20.578 - 20.677: 99.9224% ( 1) 00:07:10.497 20.972 - 21.071: 99.9280% ( 1) 00:07:10.497 21.366 - 21.465: 99.9335% ( 1) 00:07:10.497 21.760 - 21.858: 99.9391% ( 1) 00:07:10.497 22.449 - 22.548: 99.9501% ( 2) 00:07:10.497 23.631 - 23.729: 99.9557% ( 1) 00:07:10.497 25.108 - 25.206: 99.9612% ( 1) 00:07:10.497 32.295 - 32.492: 99.9668% ( 1) 00:07:10.497 32.492 - 32.689: 99.9723% ( 1) 00:07:10.497 33.871 - 34.068: 99.9778% ( 1) 00:07:10.497 35.446 - 35.643: 99.9834% ( 1) 00:07:10.497 39.975 - 40.172: 99.9889% ( 1) 00:07:10.497 45.489 - 45.686: 99.9945% ( 1) 00:07:10.497 112.640 - 113.428: 100.0000% ( 1) 00:07:10.497 00:07:10.497 00:07:10.497 real 0m1.226s 00:07:10.497 user 0m1.071s 00:07:10.497 sys 0m0.103s 00:07:10.497 10:35:36 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.497 10:35:36 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:10.497 ************************************ 00:07:10.497 END TEST nvme_overhead 00:07:10.497 ************************************ 00:07:10.497 10:35:36 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:10.497 10:35:36 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:10.497 10:35:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.497 10:35:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.497 ************************************ 00:07:10.497 START TEST nvme_arbitration 00:07:10.497 ************************************ 00:07:10.497 10:35:36 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:13.785 Initializing NVMe Controllers 00:07:13.785 Attached to 0000:00:10.0 00:07:13.785 Attached to 0000:00:11.0 00:07:13.785 Attached to 0000:00:13.0 00:07:13.785 Attached to 0000:00:12.0 00:07:13.785 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:13.785 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:13.785 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:13.785 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:13.785 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:13.785 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:13.785 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:13.785 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:13.785 Initialization complete. Launching workers. 
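The arbitration example echoes its full configuration before launching workers, so the run below can be reproduced standalone. Only -c (core mask) and -i (shared-memory ID) are interpreted in the comments, per common SPDK example conventions; the remaining flags are copied verbatim from the configuration line above rather than documented here:

  # Reproduce the recorded arbitration run (paths as in this CI job).
  /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
  # -c 0xf starts one worker on each of cores 0-3, matching the
  # "Starting thread on core N" lines below; -t 3 bounds the run at ~3 s.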
00:07:13.785 Starting thread on core 1 with urgent priority queue 00:07:13.785 Starting thread on core 2 with urgent priority queue 00:07:13.785 Starting thread on core 3 with urgent priority queue 00:07:13.785 Starting thread on core 0 with urgent priority queue 00:07:13.785 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:13.785 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:13.785 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:07:13.785 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:07:13.785 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:07:13.785 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:07:13.785 ======================================================== 00:07:13.785 00:07:13.785 ************************************ 00:07:13.785 END TEST nvme_arbitration 00:07:13.785 ************************************ 00:07:13.785 00:07:13.785 real 0m3.306s 00:07:13.785 user 0m9.227s 00:07:13.785 sys 0m0.113s 00:07:13.785 10:35:39 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.785 10:35:39 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:13.785 10:35:39 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:13.785 10:35:39 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.785 10:35:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.785 10:35:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.785 ************************************ 00:07:13.785 START TEST nvme_single_aen 00:07:13.785 ************************************ 00:07:13.785 10:35:39 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:14.046 Asynchronous Event Request test 00:07:14.046 Attached to 0000:00:10.0 00:07:14.046 Attached to 0000:00:11.0 00:07:14.046 Attached to 0000:00:13.0 00:07:14.046 Attached to 0000:00:12.0 00:07:14.046 Reset controller to setup AER completions for this process 00:07:14.046 Registering asynchronous event callbacks... 
00:07:14.046 Getting orig temperature thresholds of all controllers 00:07:14.046 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:14.046 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:14.046 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:14.046 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:14.046 Setting all controllers temperature threshold low to trigger AER 00:07:14.046 Waiting for all controllers temperature threshold to be set lower 00:07:14.046 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:14.046 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:14.046 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:14.046 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:14.046 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:14.046 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:14.046 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:14.046 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:14.046 Waiting for all controllers to trigger AER and reset threshold 00:07:14.046 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.046 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.046 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.046 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.046 Cleaning up... 00:07:14.046 ************************************ 00:07:14.046 END TEST nvme_single_aen 00:07:14.046 ************************************ 00:07:14.046 00:07:14.046 real 0m0.222s 00:07:14.046 user 0m0.080s 00:07:14.046 sys 0m0.097s 00:07:14.046 10:35:39 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.046 10:35:39 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:14.046 10:35:39 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:14.046 10:35:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.046 10:35:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.046 10:35:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.046 ************************************ 00:07:14.046 START TEST nvme_doorbell_aers 00:07:14.046 ************************************ 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
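The xtrace above shows how nvme_doorbell_aers builds its device list: gen_nvme.sh emits a JSON config and jq extracts each controller's PCIe address. Combined with the per-device timeout loop traced next, the pattern amounts to:

  rootdir=/home/vagrant/spdk_repo/spdk

  # Enumerate the PCIe addresses (BDFs) of all local NVMe controllers.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} == 0 )) && exit 1   # guard seen in the trace as "(( 4 == 0 ))"

  # Run the doorbell test against each controller, capped at 10 s per device;
  # --preserve-status returns the test binary's exit code, not timeout's.
  for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
      "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done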
00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:14.046 10:35:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:14.308 [2024-11-18 10:35:40.070425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:24.313 Executing: test_write_invalid_db 00:07:24.313 Waiting for AER completion... 00:07:24.313 Failure: test_write_invalid_db 00:07:24.313 00:07:24.313 Executing: test_invalid_db_write_overflow_sq 00:07:24.313 Waiting for AER completion... 00:07:24.313 Failure: test_invalid_db_write_overflow_sq 00:07:24.313 00:07:24.313 Executing: test_invalid_db_write_overflow_cq 00:07:24.313 Waiting for AER completion... 00:07:24.313 Failure: test_invalid_db_write_overflow_cq 00:07:24.313 00:07:24.313 10:35:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:24.313 10:35:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:24.313 [2024-11-18 10:35:50.111086] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:34.309 Executing: test_write_invalid_db 00:07:34.309 Waiting for AER completion... 00:07:34.309 Failure: test_write_invalid_db 00:07:34.309 00:07:34.309 Executing: test_invalid_db_write_overflow_sq 00:07:34.309 Waiting for AER completion... 00:07:34.309 Failure: test_invalid_db_write_overflow_sq 00:07:34.309 00:07:34.309 Executing: test_invalid_db_write_overflow_cq 00:07:34.309 Waiting for AER completion... 00:07:34.309 Failure: test_invalid_db_write_overflow_cq 00:07:34.309 00:07:34.309 10:35:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:34.309 10:35:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:34.309 [2024-11-18 10:36:00.131269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:44.279 Executing: test_write_invalid_db 00:07:44.279 Waiting for AER completion... 00:07:44.279 Failure: test_write_invalid_db 00:07:44.279 00:07:44.279 Executing: test_invalid_db_write_overflow_sq 00:07:44.279 Waiting for AER completion... 00:07:44.279 Failure: test_invalid_db_write_overflow_sq 00:07:44.279 00:07:44.279 Executing: test_invalid_db_write_overflow_cq 00:07:44.279 Waiting for AER completion... 
00:07:44.279 Failure: test_invalid_db_write_overflow_cq 00:07:44.279 00:07:44.279 10:36:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:44.279 10:36:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:44.537 [2024-11-18 10:36:10.182931] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 Executing: test_write_invalid_db 00:07:54.584 Waiting for AER completion... 00:07:54.584 Failure: test_write_invalid_db 00:07:54.584 00:07:54.584 Executing: test_invalid_db_write_overflow_sq 00:07:54.584 Waiting for AER completion... 00:07:54.584 Failure: test_invalid_db_write_overflow_sq 00:07:54.584 00:07:54.584 Executing: test_invalid_db_write_overflow_cq 00:07:54.584 Waiting for AER completion... 00:07:54.584 Failure: test_invalid_db_write_overflow_cq 00:07:54.584 00:07:54.584 00:07:54.584 real 0m40.182s 00:07:54.584 user 0m34.143s 00:07:54.584 sys 0m5.656s 00:07:54.584 10:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.584 10:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:54.584 ************************************ 00:07:54.584 END TEST nvme_doorbell_aers 00:07:54.584 ************************************ 00:07:54.584 10:36:20 nvme -- nvme/nvme.sh@97 -- # uname 00:07:54.584 10:36:20 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:54.584 10:36:20 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:54.584 10:36:20 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:54.584 10:36:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.584 10:36:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.584 ************************************ 00:07:54.584 START TEST nvme_multi_aen 00:07:54.584 ************************************ 00:07:54.584 10:36:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:54.584 [2024-11-18 10:36:20.198517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.198574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.198583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.199635] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.199655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.199663] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.200669] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. 
Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.200762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.200817] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.201722] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.201812] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 [2024-11-18 10:36:20.201864] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:54.584 Child process pid: 63643 00:07:54.584 [Child] Asynchronous Event Request test 00:07:54.584 [Child] Attached to 0000:00:10.0 00:07:54.584 [Child] Attached to 0000:00:11.0 00:07:54.584 [Child] Attached to 0000:00:13.0 00:07:54.584 [Child] Attached to 0000:00:12.0 00:07:54.584 [Child] Registering asynchronous event callbacks... 00:07:54.584 [Child] Getting orig temperature thresholds of all controllers 00:07:54.584 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.584 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.584 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.584 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.584 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:54.584 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.584 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.584 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.584 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.585 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 [Child] Cleaning up... 00:07:54.585 Asynchronous Event Request test 00:07:54.585 Attached to 0000:00:10.0 00:07:54.585 Attached to 0000:00:11.0 00:07:54.585 Attached to 0000:00:13.0 00:07:54.585 Attached to 0000:00:12.0 00:07:54.585 Reset controller to setup AER completions for this process 00:07:54.585 Registering asynchronous event callbacks... 
00:07:54.585 Getting orig temperature thresholds of all controllers 00:07:54.585 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.585 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.585 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.585 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.585 Setting all controllers temperature threshold low to trigger AER 00:07:54.585 Waiting for all controllers temperature threshold to be set lower 00:07:54.585 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.585 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:54.585 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.585 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:54.585 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.585 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:54.585 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.585 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:54.585 Waiting for all controllers to trigger AER and reset threshold 00:07:54.585 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.585 Cleaning up... 00:07:54.585 00:07:54.585 real 0m0.434s 00:07:54.585 user 0m0.149s 00:07:54.585 sys 0m0.179s 00:07:54.585 10:36:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.585 10:36:20 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:54.585 ************************************ 00:07:54.585 END TEST nvme_multi_aen 00:07:54.585 ************************************ 00:07:54.844 10:36:20 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:54.844 10:36:20 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.844 10:36:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.844 10:36:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.844 ************************************ 00:07:54.844 START TEST nvme_startup 00:07:54.844 ************************************ 00:07:54.844 10:36:20 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:54.844 Initializing NVMe Controllers 00:07:54.844 Attached to 0000:00:10.0 00:07:54.844 Attached to 0000:00:11.0 00:07:54.844 Attached to 0000:00:13.0 00:07:54.844 Attached to 0000:00:12.0 00:07:54.844 Initialization complete. 00:07:54.844 Time used:143607.703 (us). 
00:07:54.844 ************************************ 00:07:54.844 END TEST nvme_startup 00:07:54.844 ************************************ 00:07:54.844 00:07:54.844 real 0m0.204s 00:07:54.844 user 0m0.058s 00:07:54.844 sys 0m0.102s 00:07:54.844 10:36:20 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.844 10:36:20 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:55.103 10:36:20 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:55.103 10:36:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.103 10:36:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.103 10:36:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.103 ************************************ 00:07:55.103 START TEST nvme_multi_secondary 00:07:55.103 ************************************ 00:07:55.103 10:36:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:07:55.103 10:36:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63694 00:07:55.103 10:36:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63695 00:07:55.103 10:36:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:55.103 10:36:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:55.103 10:36:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:58.389 Initializing NVMe Controllers 00:07:58.389 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.389 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.389 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.389 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.389 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:58.389 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:58.389 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:58.389 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:58.389 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:58.389 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:58.389 Initialization complete. Launching workers. 
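nvme_multi_secondary exercises SPDK's multi-process mode: all three spdk_nvme_perf instances pass the same shared-memory group ID (-i 0), so the secondaries attach to the controllers the primary initialized instead of probing the devices themselves. A sketch of how the runs overlap (flags verbatim from the trace above; the sleep is an assumption standing in for the harness's own sequencing):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # 5 s run on core 0 starts
  pid0=$!                                            # first and outlives the rest
  sleep 1                                            # assumed settle time
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s secondary on core 1
  pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # 3 s secondary on core 2
  wait $pid0 $pid1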
00:07:58.389 ======================================================== 00:07:58.389 Latency(us) 00:07:58.389 Device Information : IOPS MiB/s Average min max 00:07:58.389 PCIE (0000:00:10.0) NSID 1 from core 2: 3143.05 12.28 5087.62 781.35 14036.20 00:07:58.389 PCIE (0000:00:11.0) NSID 1 from core 2: 3143.05 12.28 5090.58 797.21 13110.71 00:07:58.389 PCIE (0000:00:13.0) NSID 1 from core 2: 3143.05 12.28 5090.12 786.10 12653.84 00:07:58.389 PCIE (0000:00:12.0) NSID 1 from core 2: 3143.05 12.28 5090.14 788.19 12604.27 00:07:58.390 PCIE (0000:00:12.0) NSID 2 from core 2: 3143.05 12.28 5090.77 779.85 14518.70 00:07:58.390 PCIE (0000:00:12.0) NSID 3 from core 2: 3143.05 12.28 5090.77 782.56 14443.79 00:07:58.390 ======================================================== 00:07:58.390 Total : 18858.33 73.67 5090.00 779.85 14518.70 00:07:58.390 00:07:58.390 10:36:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63694 00:07:58.390 Initializing NVMe Controllers 00:07:58.390 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.390 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.390 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.390 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.390 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:58.390 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:58.390 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:58.390 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:58.390 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:58.390 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:58.390 Initialization complete. Launching workers. 00:07:58.390 ======================================================== 00:07:58.390 Latency(us) 00:07:58.390 Device Information : IOPS MiB/s Average min max 00:07:58.390 PCIE (0000:00:10.0) NSID 1 from core 1: 7873.92 30.76 2030.68 718.24 7093.41 00:07:58.390 PCIE (0000:00:11.0) NSID 1 from core 1: 7873.92 30.76 2031.61 738.06 6780.75 00:07:58.390 PCIE (0000:00:13.0) NSID 1 from core 1: 7873.92 30.76 2031.57 745.01 5811.77 00:07:58.390 PCIE (0000:00:12.0) NSID 1 from core 1: 7873.92 30.76 2031.63 749.36 5431.52 00:07:58.390 PCIE (0000:00:12.0) NSID 2 from core 1: 7873.92 30.76 2031.59 744.93 5905.88 00:07:58.390 PCIE (0000:00:12.0) NSID 3 from core 1: 7873.92 30.76 2031.56 725.07 6550.41 00:07:58.390 ======================================================== 00:07:58.390 Total : 47243.54 184.55 2031.44 718.24 7093.41 00:07:58.390 00:08:00.291 Initializing NVMe Controllers 00:08:00.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.291 Initialization complete. Launching workers. 
00:08:00.291 ======================================================== 00:08:00.291 Latency(us) 00:08:00.291 Device Information : IOPS MiB/s Average min max 00:08:00.291 PCIE (0000:00:10.0) NSID 1 from core 0: 11426.35 44.63 1399.11 668.35 5815.35 00:08:00.291 PCIE (0000:00:11.0) NSID 1 from core 0: 11426.35 44.63 1399.90 681.71 5773.25 00:08:00.291 PCIE (0000:00:13.0) NSID 1 from core 0: 11426.35 44.63 1399.88 651.48 5877.59 00:08:00.291 PCIE (0000:00:12.0) NSID 1 from core 0: 11426.35 44.63 1399.87 634.41 5880.78 00:08:00.291 PCIE (0000:00:12.0) NSID 2 from core 0: 11426.35 44.63 1399.85 618.73 6019.22 00:08:00.291 PCIE (0000:00:12.0) NSID 3 from core 0: 11426.35 44.63 1399.83 577.12 5865.43 00:08:00.291 ======================================================== 00:08:00.291 Total : 68558.10 267.81 1399.74 577.12 6019.22 00:08:00.291 00:08:00.291 10:36:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63695 00:08:00.291 10:36:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63764 00:08:00.291 10:36:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:00.291 10:36:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63765 00:08:00.291 10:36:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:00.291 10:36:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:03.577 Initializing NVMe Controllers 00:08:03.577 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.577 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.577 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.577 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.577 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:03.577 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:03.577 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:03.577 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:03.577 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:03.577 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:03.577 Initialization complete. Launching workers. 
00:08:03.577 ======================================================== 00:08:03.577 Latency(us) 00:08:03.577 Device Information : IOPS MiB/s Average min max 00:08:03.577 PCIE (0000:00:10.0) NSID 1 from core 0: 7492.65 29.27 2134.10 720.96 5718.89 00:08:03.577 PCIE (0000:00:11.0) NSID 1 from core 0: 7492.65 29.27 2135.08 734.43 5910.64 00:08:03.577 PCIE (0000:00:13.0) NSID 1 from core 0: 7492.65 29.27 2135.10 730.40 6331.20 00:08:03.577 PCIE (0000:00:12.0) NSID 1 from core 0: 7492.65 29.27 2135.13 734.09 5930.95 00:08:03.577 PCIE (0000:00:12.0) NSID 2 from core 0: 7492.65 29.27 2135.13 743.77 5861.07 00:08:03.577 PCIE (0000:00:12.0) NSID 3 from core 0: 7492.65 29.27 2135.34 735.56 6101.68 00:08:03.577 ======================================================== 00:08:03.577 Total : 44955.90 175.61 2134.98 720.96 6331.20 00:08:03.577 00:08:03.835 Initializing NVMe Controllers 00:08:03.835 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.835 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.835 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.835 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.835 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:03.835 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:03.835 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:03.835 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:03.835 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:03.835 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:03.835 Initialization complete. Launching workers. 00:08:03.835 ======================================================== 00:08:03.835 Latency(us) 00:08:03.835 Device Information : IOPS MiB/s Average min max 00:08:03.835 PCIE (0000:00:10.0) NSID 1 from core 1: 7603.51 29.70 2102.97 727.17 7643.38 00:08:03.835 PCIE (0000:00:11.0) NSID 1 from core 1: 7603.51 29.70 2103.87 749.06 7642.99 00:08:03.835 PCIE (0000:00:13.0) NSID 1 from core 1: 7603.51 29.70 2103.81 748.78 7032.22 00:08:03.835 PCIE (0000:00:12.0) NSID 1 from core 1: 7603.51 29.70 2103.75 739.20 7052.66 00:08:03.835 PCIE (0000:00:12.0) NSID 2 from core 1: 7603.51 29.70 2103.70 744.88 7530.97 00:08:03.835 PCIE (0000:00:12.0) NSID 3 from core 1: 7603.51 29.70 2103.73 738.22 7299.63 00:08:03.835 ======================================================== 00:08:03.835 Total : 45621.05 178.21 2103.64 727.17 7643.38 00:08:03.835 00:08:05.741 Initializing NVMe Controllers 00:08:05.741 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.741 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.741 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:05.741 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.741 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:05.741 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:05.741 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:05.741 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:05.741 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:05.741 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:05.741 Initialization complete. Launching workers. 
00:08:05.741 ======================================================== 00:08:05.741 Latency(us) 00:08:05.741 Device Information : IOPS MiB/s Average min max 00:08:05.741 PCIE (0000:00:10.0) NSID 1 from core 2: 4556.18 17.80 3507.41 734.33 13209.02 00:08:05.742 PCIE (0000:00:11.0) NSID 1 from core 2: 4556.18 17.80 3508.32 734.00 12674.47 00:08:05.742 PCIE (0000:00:13.0) NSID 1 from core 2: 4556.18 17.80 3508.09 760.01 12533.71 00:08:05.742 PCIE (0000:00:12.0) NSID 1 from core 2: 4556.18 17.80 3508.21 755.46 12881.30 00:08:05.742 PCIE (0000:00:12.0) NSID 2 from core 2: 4556.18 17.80 3507.98 754.62 13402.75 00:08:05.742 PCIE (0000:00:12.0) NSID 3 from core 2: 4556.18 17.80 3508.28 632.42 13164.27 00:08:05.742 ======================================================== 00:08:05.742 Total : 27337.10 106.79 3508.05 632.42 13402.75 00:08:05.742 00:08:05.742 ************************************ 00:08:05.742 END TEST nvme_multi_secondary 00:08:05.742 ************************************ 00:08:05.742 10:36:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63764 00:08:05.742 10:36:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63765 00:08:05.742 00:08:05.742 real 0m10.689s 00:08:05.742 user 0m18.370s 00:08:05.742 sys 0m0.652s 00:08:05.742 10:36:31 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.742 10:36:31 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 10:36:31 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:05.742 10:36:31 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62727 ]] 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1094 -- # kill 62727 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1095 -- # wait 62727 00:08:05.742 [2024-11-18 10:36:31.464912] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.464984] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.465014] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.465032] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.467161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.467215] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.467230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.467244] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.468974] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 
00:08:05.742 [2024-11-18 10:36:31.469014] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.469026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.469039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.470767] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.470809] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.470821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.470834] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63642) is not found. Dropping the request. 00:08:05.742 [2024-11-18 10:36:31.581592] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:05.742 10:36:31 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.742 10:36:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 ************************************ 00:08:05.742 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:05.742 ************************************ 00:08:05.742 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:06.004 * Looking for test storage... 
00:08:06.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.004 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:06.005 
10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:06.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63932 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63932 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63932 ']' 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
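From here the stuck-admin-command test drives everything over JSON-RPC against the spdk_tgt just started (rpc.py talks to /var/tmp/spdk.sock by default). The essential sequence, reconstructed from the rpc_cmd calls traced below, with $cmd_b64 standing for the base64-encoded raw Get Features command visible in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 1. Attach the first PCIe controller as bdev-layer controller "nvme0".
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

  # 2. Arm an injection: hold the next admin Get Features (opc 10 = 0x0a) for
  #    up to 15 s, then complete it with sct 0 / sc 1 (Invalid Opcode).
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

  # 3. Send a Get Features in the background; the injection keeps it pending.
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
  sleep 2    # as in the trace: let the command get queued first

  # 4. Reset the controller; the reset path must complete the stuck command
  #    manually, which the "Command completed manually" notice below records.
  $RPC bdev_nvme_reset_controller nvme0
  wait
  $RPC bdev_nvme_detach_controller nvme0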
00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.005 10:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 [2024-11-18 10:36:31.913619] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:06.267 [2024-11-18 10:36:31.913768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63932 ] 00:08:06.267 [2024-11-18 10:36:32.087661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.528 [2024-11-18 10:36:32.211387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.528 [2024-11-18 10:36:32.211669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.528 [2024-11-18 10:36:32.212008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.528 [2024-11-18 10:36:32.212014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:07.101 nvme0n1 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.101 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:07.363 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_LJKJ9.txt 00:08:07.363 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:07.363 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.363 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:07.363 true 00:08:07.363 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.364 10:36:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:07.364 10:36:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731926192 00:08:07.364 10:36:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63955 00:08:07.364 10:36:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:07.364 10:36:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:07.364 10:36:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:09.277 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:09.277 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.277 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:09.277 [2024-11-18 10:36:35.010616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:09.277 [2024-11-18 10:36:35.010955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:09.277 [2024-11-18 10:36:35.010984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:09.277 [2024-11-18 10:36:35.010997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.277 [2024-11-18 10:36:35.012834] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:09.277 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63955 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63955 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63955 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_LJKJ9.txt 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_LJKJ9.txt 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63932 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63932 ']' 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63932 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.278 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63932 00:08:09.279 killing process with pid 63932 00:08:09.279 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.279 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.279 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63932' 00:08:09.279 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63932 00:08:09.279 10:36:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63932 00:08:10.656 10:36:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:10.656 10:36:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:10.656 ************************************ 00:08:10.656 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:10.656 ************************************ 00:08:10.656 00:08:10.656 real 0m4.720s 
00:08:10.656 user 0m16.562s 00:08:10.656 sys 0m0.610s 00:08:10.656 10:36:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.656 10:36:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:10.656 10:36:36 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:10.656 10:36:36 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:10.656 10:36:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.656 10:36:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.656 10:36:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.656 ************************************ 00:08:10.656 START TEST nvme_fio 00:08:10.656 ************************************ 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:10.656 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:10.656 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:10.914 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:10.914 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:11.173 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:11.173 10:36:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:11.173 10:36:36 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:11.173 10:36:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:11.173 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:11.173 fio-3.35 00:08:11.173 Starting 1 thread 00:08:17.766 00:08:17.766 test: (groupid=0, jobs=1): err= 0: pid=64089: Mon Nov 18 10:36:42 2024 00:08:17.766 read: IOPS=21.3k, BW=83.2MiB/s (87.3MB/s)(167MiB/2001msec) 00:08:17.766 slat (usec): min=3, max=277, avg= 5.09, stdev= 2.90 00:08:17.766 clat (usec): min=271, max=10530, avg=3001.44, stdev=1138.10 00:08:17.766 lat (usec): min=276, max=10577, avg=3006.53, stdev=1139.50 00:08:17.766 clat percentiles (usec): 00:08:17.766 | 1.00th=[ 1532], 5.00th=[ 2057], 10.00th=[ 2147], 20.00th=[ 2311], 00:08:17.766 | 30.00th=[ 2376], 40.00th=[ 2474], 50.00th=[ 2573], 60.00th=[ 2737], 00:08:17.766 | 70.00th=[ 2966], 80.00th=[ 3490], 90.00th=[ 4752], 95.00th=[ 5604], 00:08:17.766 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 9372], 00:08:17.766 | 99.99th=[10421] 00:08:17.766 bw ( KiB/s): min=78344, max=92072, per=99.98%, avg=85208.25, stdev=5610.81, samples=4 00:08:17.766 iops : min=19586, max=23018, avg=21302.00, stdev=1402.71, samples=4 00:08:17.766 write: IOPS=21.2k, BW=82.6MiB/s (86.7MB/s)(165MiB/2001msec); 0 zone resets 00:08:17.766 slat (usec): min=3, max=851, avg= 5.29, stdev= 5.03 00:08:17.766 clat (usec): min=227, max=10450, avg=3002.64, stdev=1128.70 00:08:17.766 lat (usec): min=233, max=10469, avg=3007.92, stdev=1130.06 00:08:17.766 clat percentiles (usec): 00:08:17.766 | 1.00th=[ 1500], 5.00th=[ 2057], 10.00th=[ 2147], 20.00th=[ 2311], 00:08:17.766 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2606], 60.00th=[ 2737], 00:08:17.766 | 70.00th=[ 2966], 80.00th=[ 3458], 90.00th=[ 4752], 95.00th=[ 5538], 00:08:17.766 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 9241], 99.95th=[ 9503], 00:08:17.766 | 99.99th=[10290] 00:08:17.766 bw ( KiB/s): min=75584, max=91887, per=100.00%, avg=84634.50, stdev=6740.64, samples=4 00:08:17.766 iops : min=18896, max=22971, avg=21158.25, stdev=1684.85, samples=4 00:08:17.766 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.08% 00:08:17.766 lat (msec) : 2=3.37%, 4=81.45%, 10=15.06%, 20=0.01% 00:08:17.766 cpu : usr=98.65%, sys=0.15%, ctx=28, majf=0, 
minf=607 00:08:17.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:17.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:17.766 issued rwts: total=42633,42331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:17.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:17.766 00:08:17.766 Run status group 0 (all jobs): 00:08:17.766 READ: bw=83.2MiB/s (87.3MB/s), 83.2MiB/s-83.2MiB/s (87.3MB/s-87.3MB/s), io=167MiB (175MB), run=2001-2001msec 00:08:17.766 WRITE: bw=82.6MiB/s (86.7MB/s), 82.6MiB/s-82.6MiB/s (86.7MB/s-86.7MB/s), io=165MiB (173MB), run=2001-2001msec 00:08:17.766 ----------------------------------------------------- 00:08:17.766 Suppressions used: 00:08:17.766 count bytes template 00:08:17.766 1 32 /usr/src/fio/parse.c 00:08:17.766 1 8 libtcmalloc_minimal.so 00:08:17.766 ----------------------------------------------------- 00:08:17.766 00:08:17.766 10:36:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:17.766 10:36:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:17.766 10:36:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:17.766 10:36:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:17.766 10:36:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:17.766 10:36:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:17.766 10:36:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:17.766 10:36:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:17.766 10:36:43 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:17.766 10:36:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:17.766 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:17.766 fio-3.35 00:08:17.766 Starting 1 thread 00:08:24.345 00:08:24.345 test: (groupid=0, jobs=1): err= 0: pid=64144: Mon Nov 18 10:36:49 2024 00:08:24.345 read: IOPS=21.4k, BW=83.5MiB/s (87.6MB/s)(167MiB/2001msec) 00:08:24.345 slat (nsec): min=4217, max=66248, avg=5090.19, stdev=2265.37 00:08:24.345 clat (usec): min=290, max=10353, avg=2986.25, stdev=1041.89 00:08:24.345 lat (usec): min=294, max=10365, avg=2991.34, stdev=1043.02 00:08:24.345 clat percentiles (usec): 00:08:24.345 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:08:24.345 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2737], 00:08:24.345 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4424], 95.00th=[ 5407], 00:08:24.345 | 99.00th=[ 6915], 99.50th=[ 7504], 99.90th=[ 9634], 99.95th=[ 9896], 00:08:24.345 | 99.99th=[10159] 00:08:24.345 bw ( KiB/s): min=74656, max=89704, per=95.93%, avg=82053.33, stdev=7527.20, samples=3 00:08:24.345 iops : min=18664, max=22426, avg=20513.33, stdev=1881.80, samples=3 00:08:24.345 write: IOPS=21.2k, BW=82.9MiB/s (86.9MB/s)(166MiB/2001msec); 0 zone resets 00:08:24.345 slat (nsec): min=4270, max=61554, avg=5231.84, stdev=2273.08 00:08:24.345 clat (usec): min=344, max=10292, avg=2997.38, stdev=1030.91 00:08:24.345 lat (usec): min=349, max=10305, avg=3002.62, stdev=1032.01 00:08:24.345 clat percentiles (usec): 00:08:24.345 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:08:24.345 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2769], 00:08:24.345 | 70.00th=[ 2933], 80.00th=[ 3228], 90.00th=[ 4424], 95.00th=[ 5407], 00:08:24.345 | 99.00th=[ 6849], 99.50th=[ 7504], 99.90th=[ 9503], 99.95th=[ 9765], 00:08:24.345 | 99.99th=[10028] 00:08:24.345 bw ( KiB/s): min=75144, max=89736, per=96.74%, avg=82120.00, stdev=7317.02, samples=3 00:08:24.345 iops : min=18786, max=22434, avg=20530.00, stdev=1829.26, samples=3 00:08:24.345 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.03% 00:08:24.345 lat (msec) : 2=1.27%, 4=85.87%, 10=12.77%, 20=0.02% 00:08:24.345 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=607 00:08:24.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:24.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:24.345 issued rwts: total=42787,42466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:24.345 00:08:24.345 Run status group 0 (all jobs): 00:08:24.345 READ: bw=83.5MiB/s (87.6MB/s), 83.5MiB/s-83.5MiB/s (87.6MB/s-87.6MB/s), io=167MiB (175MB), run=2001-2001msec 00:08:24.345 WRITE: bw=82.9MiB/s (86.9MB/s), 82.9MiB/s-82.9MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:08:24.345 ----------------------------------------------------- 00:08:24.345 Suppressions used: 00:08:24.345 count bytes template 00:08:24.345 1 32 /usr/src/fio/parse.c 00:08:24.345 1 8 libtcmalloc_minimal.so 00:08:24.345 ----------------------------------------------------- 00:08:24.345 
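The block above is the per-controller pattern this test repeats for all four NVMe devices: identify the namespace, pick a block size, resolve the sanitizer runtime with ldd, and launch fio through the SPDK plugin. A minimal sketch of that invocation, assuming the paths from this run; the loop and variable names are illustrative, not the actual nvme.sh/autotest_common.sh code:

    #!/usr/bin/env bash
    # Sketch of the per-controller fio run traced above. When the SPDK fio
    # plugin is built with ASan, the sanitizer runtime must be preloaded
    # ahead of the plugin; this mirrors the ldd/grep/awk steps in the trace.
    set -euo pipefail
    fio=/usr/src/fio/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    for traddr in 0000.00.10.0 0000.00.11.0 0000.00.12.0 0000.00.13.0; do
        asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
            "$fio" "$config" "--filename=trtype=PCIe traddr=$traddr" --bs=4096
    done

fio treats ':' as a filename separator, which is why the trace passes the PCIe address with its colons rewritten as dots (traddr=0000.00.10.0 rather than 0000:00:10.0).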
00:08:24.345 10:36:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:24.345 10:36:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:24.345 10:36:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:24.345 10:36:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:24.345 10:36:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:24.345 10:36:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:24.345 10:36:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:24.345 10:36:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:24.345 10:36:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:24.606 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:24.606 fio-3.35 00:08:24.606 Starting 1 thread 00:08:29.875 00:08:29.875 test: (groupid=0, jobs=1): err= 0: pid=64205: Mon Nov 18 10:36:55 2024 00:08:29.875 read: IOPS=17.0k, BW=66.4MiB/s (69.6MB/s)(133MiB/2001msec) 00:08:29.875 slat (nsec): min=4235, max=73681, avg=5802.52, stdev=3137.97 00:08:29.875 clat (usec): min=902, max=9818, avg=3728.87, stdev=1302.00 00:08:29.875 lat (usec): min=908, max=9823, avg=3734.67, stdev=1303.16 00:08:29.875 clat percentiles (usec): 00:08:29.875 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2638], 00:08:29.875 | 
30.00th=[ 2802], 40.00th=[ 2999], 50.00th=[ 3261], 60.00th=[ 3654], 00:08:29.875 | 70.00th=[ 4293], 80.00th=[ 4948], 90.00th=[ 5735], 95.00th=[ 6259], 00:08:29.875 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 9110], 00:08:29.875 | 99.99th=[ 9503] 00:08:29.875 bw ( KiB/s): min=58784, max=70720, per=97.53%, avg=66322.67, stdev=6558.81, samples=3 00:08:29.875 iops : min=14696, max=17680, avg=16580.67, stdev=1639.70, samples=3 00:08:29.875 write: IOPS=17.0k, BW=66.6MiB/s (69.8MB/s)(133MiB/2001msec); 0 zone resets 00:08:29.875 slat (nsec): min=4301, max=80538, avg=5908.30, stdev=3215.38 00:08:29.875 clat (usec): min=617, max=9562, avg=3766.43, stdev=1307.49 00:08:29.875 lat (usec): min=635, max=9568, avg=3772.34, stdev=1308.67 00:08:29.875 clat percentiles (usec): 00:08:29.875 | 1.00th=[ 2073], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:08:29.875 | 30.00th=[ 2835], 40.00th=[ 3032], 50.00th=[ 3294], 60.00th=[ 3654], 00:08:29.875 | 70.00th=[ 4293], 80.00th=[ 5014], 90.00th=[ 5735], 95.00th=[ 6325], 00:08:29.875 | 99.00th=[ 7373], 99.50th=[ 7767], 99.90th=[ 8717], 99.95th=[ 9110], 00:08:29.875 | 99.99th=[ 9372] 00:08:29.875 bw ( KiB/s): min=59064, max=70856, per=97.18%, avg=66237.33, stdev=6297.42, samples=3 00:08:29.875 iops : min=14766, max=17714, avg=16559.33, stdev=1574.36, samples=3 00:08:29.875 lat (usec) : 750=0.01%, 1000=0.01% 00:08:29.875 lat (msec) : 2=0.84%, 4=64.80%, 10=34.34% 00:08:29.875 cpu : usr=98.85%, sys=0.00%, ctx=4, majf=0, minf=607 00:08:29.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:29.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:29.875 issued rwts: total=34018,34095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:29.876 00:08:29.876 Run status group 0 (all jobs): 00:08:29.876 READ: bw=66.4MiB/s (69.6MB/s), 66.4MiB/s-66.4MiB/s (69.6MB/s-69.6MB/s), io=133MiB (139MB), run=2001-2001msec 00:08:29.876 WRITE: bw=66.6MiB/s (69.8MB/s), 66.6MiB/s-66.6MiB/s (69.8MB/s-69.8MB/s), io=133MiB (140MB), run=2001-2001msec 00:08:30.134 ----------------------------------------------------- 00:08:30.134 Suppressions used: 00:08:30.134 count bytes template 00:08:30.134 1 32 /usr/src/fio/parse.c 00:08:30.134 1 8 libtcmalloc_minimal.so 00:08:30.134 ----------------------------------------------------- 00:08:30.134 00:08:30.134 10:36:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:30.134 10:36:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:30.134 10:36:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:30.134 10:36:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:30.393 10:36:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:30.393 10:36:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:30.651 10:36:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:30.651 10:36:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:30.651 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:30.651 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:30.651 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:30.652 10:36:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:30.909 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:30.909 fio-3.35 00:08:30.909 Starting 1 thread 00:08:39.018 00:08:39.018 test: (groupid=0, jobs=1): err= 0: pid=64260: Mon Nov 18 10:37:03 2024 00:08:39.018 read: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(127MiB/2001msec) 00:08:39.018 slat (nsec): min=4244, max=86121, avg=6052.87, stdev=3602.96 00:08:39.018 clat (usec): min=724, max=10337, avg=3895.16, stdev=1433.30 00:08:39.018 lat (usec): min=760, max=10374, avg=3901.21, stdev=1434.76 00:08:39.018 clat percentiles (usec): 00:08:39.018 | 1.00th=[ 2089], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:08:39.018 | 30.00th=[ 2802], 40.00th=[ 3032], 50.00th=[ 3425], 60.00th=[ 3982], 00:08:39.018 | 70.00th=[ 4621], 80.00th=[ 5211], 90.00th=[ 6063], 95.00th=[ 6652], 00:08:39.018 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 9634], 00:08:39.018 | 99.99th=[10290] 00:08:39.018 bw ( KiB/s): min=60992, max=69080, per=98.64%, avg=64146.67, stdev=4327.43, samples=3 00:08:39.018 iops : min=15248, max=17270, avg=16036.67, stdev=1081.86, samples=3 00:08:39.018 write: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(127MiB/2001msec); 0 zone resets 00:08:39.018 slat (nsec): min=4304, max=81638, avg=6180.24, stdev=3596.39 00:08:39.018 clat (usec): min=1165, max=10263, avg=3941.89, stdev=1428.40 00:08:39.018 lat (usec): min=1177, max=10284, avg=3948.07, stdev=1429.92 00:08:39.018 clat percentiles (usec): 00:08:39.018 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:08:39.018 | 30.00th=[ 2835], 40.00th=[ 3097], 50.00th=[ 3458], 60.00th=[ 4080], 00:08:39.018 | 70.00th=[ 4686], 80.00th=[ 5276], 90.00th=[ 6128], 95.00th=[ 6652], 
00:08:39.018 | 99.00th=[ 7570], 99.50th=[ 7898], 99.90th=[ 8979], 99.95th=[ 9634], 00:08:39.018 | 99.99th=[10159] 00:08:39.018 bw ( KiB/s): min=60384, max=68512, per=98.03%, avg=63888.00, stdev=4178.15, samples=3 00:08:39.018 iops : min=15096, max=17128, avg=15972.00, stdev=1044.54, samples=3 00:08:39.018 lat (usec) : 750=0.01% 00:08:39.018 lat (msec) : 2=0.46%, 4=59.04%, 10=40.47%, 20=0.02% 00:08:39.018 cpu : usr=98.75%, sys=0.10%, ctx=2, majf=0, minf=605 00:08:39.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:39.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.018 issued rwts: total=32530,32603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.018 00:08:39.018 Run status group 0 (all jobs): 00:08:39.018 READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:39.018 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=127MiB (134MB), run=2001-2001msec 00:08:39.018 ----------------------------------------------------- 00:08:39.018 Suppressions used: 00:08:39.018 count bytes template 00:08:39.018 1 32 /usr/src/fio/parse.c 00:08:39.018 1 8 libtcmalloc_minimal.so 00:08:39.018 ----------------------------------------------------- 00:08:39.018 00:08:39.018 10:37:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:39.018 10:37:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:39.018 00:08:39.018 real 0m27.411s 00:08:39.018 user 0m16.801s 00:08:39.018 sys 0m18.948s 00:08:39.018 10:37:03 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.018 ************************************ 00:08:39.018 END TEST nvme_fio 00:08:39.018 ************************************ 00:08:39.018 10:37:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:39.018 ************************************ 00:08:39.018 END TEST nvme 00:08:39.018 ************************************ 00:08:39.018 00:08:39.019 real 1m36.328s 00:08:39.019 user 3m37.065s 00:08:39.019 sys 0m29.356s 00:08:39.019 10:37:03 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.019 10:37:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.019 10:37:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:39.019 10:37:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:39.019 10:37:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.019 10:37:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.019 10:37:03 -- common/autotest_common.sh@10 -- # set +x 00:08:39.019 ************************************ 00:08:39.019 START TEST nvme_scc 00:08:39.019 ************************************ 00:08:39.019 10:37:03 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:39.019 * Looking for test storage... 
00:08:39.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:39.019 10:37:03 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.019 10:37:03 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.019 10:37:03 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.019 10:37:04 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:39.019 10:37:04 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.019 10:37:04 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.019 --rc genhtml_branch_coverage=1 00:08:39.019 --rc genhtml_function_coverage=1 00:08:39.019 --rc genhtml_legend=1 00:08:39.019 --rc geninfo_all_blocks=1 00:08:39.019 --rc geninfo_unexecuted_blocks=1 00:08:39.019 00:08:39.019 ' 00:08:39.019 10:37:04 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.019 --rc genhtml_branch_coverage=1 00:08:39.019 --rc genhtml_function_coverage=1 00:08:39.019 --rc genhtml_legend=1 00:08:39.019 --rc geninfo_all_blocks=1 00:08:39.019 --rc geninfo_unexecuted_blocks=1 00:08:39.019 00:08:39.019 ' 00:08:39.019 10:37:04 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:39.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.019 --rc genhtml_branch_coverage=1 00:08:39.019 --rc genhtml_function_coverage=1 00:08:39.019 --rc genhtml_legend=1 00:08:39.019 --rc geninfo_all_blocks=1 00:08:39.019 --rc geninfo_unexecuted_blocks=1 00:08:39.019 00:08:39.019 ' 00:08:39.019 10:37:04 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.019 --rc genhtml_branch_coverage=1 00:08:39.019 --rc genhtml_function_coverage=1 00:08:39.019 --rc genhtml_legend=1 00:08:39.019 --rc geninfo_all_blocks=1 00:08:39.019 --rc geninfo_unexecuted_blocks=1 00:08:39.019 00:08:39.019 ' 00:08:39.019 10:37:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.019 10:37:04 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.019 10:37:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.019 10:37:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.019 10:37:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.019 10:37:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:39.019 10:37:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
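From here the trace is scan_nvme_ctrls caching every id-ctrl field of each controller into an associative array (nvme0[vid], nvme0[ssvid], and so on), one register per read. A simplified stand-in for that nvme_get loop, assuming plain "field : value" output from nvme-cli; the real functions.sh drives the same split through eval per register:

    #!/usr/bin/env bash
    # Simplified stand-in for the nvme_get loop traced below: 'nvme id-ctrl'
    # prints "field : value" pairs, split here on ':' and cached so later
    # checks can read e.g. ${nvme0[vid]} without re-running identify.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # field name, padding stripped
        val=${val#"${val%%[![:space:]]*}"}     # left-trim the value
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    printf 'vid=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[mdts]}"

Lines without a value (section headers in the identify output) are skipped, so only real register pairs land in the array.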
00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:39.019 10:37:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:39.019 10:37:04 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.019 10:37:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:39.019 10:37:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:39.019 10:37:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:39.019 10:37:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.019 Waiting for block devices as requested 00:08:39.019 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.019 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.019 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.019 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.308 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:44.308 10:37:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:44.308 10:37:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:44.308 10:37:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:44.308 10:37:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:44.308 10:37:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.308 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:44.309 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:08:44.309-311 10:37:09 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl fields parsed into nvme0[] (one IFS=: / read -r reg val / eval cycle per field; condensed):
    elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
    rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
    nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
    sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
    nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341
    ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
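
The fields above are all produced by the same small routine in nvme/functions.sh: run nvme-cli, split each "name : value" line on the colon, and eval the value into a global associative array named after the device. A minimal sketch of that loop, reconstructed only from the statements visible in this xtrace; the real nvme_get may trim keys and handle its arguments differently:

  # Hedged reconstruction of the parsing loop seen at functions.sh@16-23.
  # Populates e.g. nvme0[wctemp]="343" from the id-ctrl output of nvme-cli.
  nvme_get() {
      local ref=$1 reg val                               # functions.sh@17
      shift                                              # functions.sh@18
      local -gA "$ref=()"                                # functions.sh@20
      while IFS=: read -r reg val; do                    # functions.sh@21
          [[ -n $val ]] || continue                      # functions.sh@22: skip lines with no value
          reg=${reg//[[:space:]]/}                       # assumed: strip padding around the key
          eval "${ref}[$reg]=\"${val# }\""               # functions.sh@23, e.g. nvme0[elpe]="0"
      done < <(/usr/local/src/nvme-cli/nvme "$@")        # functions.sh@16
  }

It is invoked as nvme_get nvme0 id-ctrl /dev/nvme0 and nvme_get nvme0n1 id-ns /dev/nvme0n1, matching the @52 and @57 frames in the trace.
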
00:08:44.312 10:37:09 nvme_scc -- nvme/functions.sh@21-23 -- # power-state fields for nvme0 (condensed):
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:08:44.312 10:37:09 nvme_scc -- nvme/functions.sh@53-57 -- # namespace scan for nvme0: /sys/class/nvme/nvme0/nvme0n1 exists, ns_dev=nvme0n1
00:08:44.312 10:37:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:08:44.312 10:37:09 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme0n1[] (condensed):
    nsze=0x140000 ncap=0x140000 nuse=0x140000
00:08:44.312-314 10:37:09 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields for nvme0n1, continued (condensed):
    nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
    mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:08:44.314 10:37:09 nvme_scc -- nvme/functions.sh@58-63 -- # register controller 0: _ctrl_ns[1]=nvme0n1 ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
00:08:44.314 10:37:09 nvme_scc -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme1 exists, pci=0000:00:10.0, pci_can_use 0000:00:10.0 returns 0, ctrl_dev=nvme1, nvme_get nvme1 id-ctrl /dev/nvme1
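
At this point the first controller is fully catalogued (ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0, ordered_ctrls[0]=nvme0, nvme0_ns[1]=nvme0n1) and the @47 loop moves on to /sys/class/nvme/nvme1 at 0000:00:10.0. A short sketch of how those maps can be consumed once enumeration finishes; the helper below is illustrative, not taken from functions.sh, and it assumes namespace 1 exists for the controller it is given:

  # Print a controller's PCI address and the in-use LBA format of namespace 1.
  # The low bits of FLBAS select which lbafN entry is active.
  print_inuse_lbaf() {                           # usage: print_inuse_lbaf nvme0
      local ctrl=$1
      local -n ns_map=${nvmes[$ctrl]}            # e.g. nvme0_ns, filled at functions.sh@58
      local -n id_ns=${ns_map[1]}                # e.g. the nvme0n1 table built by nvme_get
      local fmt=$(( ${id_ns[flbas]} & 0xf ))     # flbas=0x4 here, so LBA format 4
      echo "$ctrl @ ${bdfs[$ctrl]}: lbaf$fmt = ${id_ns[lbaf$fmt]}"
  }
  for ctrl in "${ordered_ctrls[@]}"; do
      [[ -n $ctrl ]] && print_inuse_lbaf "$ctrl"
  done

For nvme0 this prints lbaf4 = 'ms:0 lbads:12 rp:0 (in use)', i.e. 4096-byte data blocks (lbads is log2 of the LBA data size) with no metadata.
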
00:08:44.314 10:37:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:08:44.314-316 10:37:09 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl fields parsed into nvme1[] (condensed):
    vid=0x1b36 ssvid=0x1af4 sn=12340 mn='QEMU NVMe Ctrl' fr=8.0.0 rab=6 ieee=525400 cmic=0 mdts=7
    cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
    mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
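
Both QEMU controllers report oncs=0x15d (nvme0 above, and nvme1 in the entries that follow). The part of that bitmask this nvme_scc test ultimately cares about is bit 8 of ONCS, which the NVMe spec defines as Copy command support. A small illustrative check against the tables built here; the helper name is made up for this example, not taken from functions.sh:

  # ONCS bit 8 (0x100) = Copy command supported; 0x15d has it set.
  supports_scc() {                 # usage: supports_scc nvme1
      local -n _ctrl=$1
      (( ${_ctrl[oncs]} & 0x100 ))
  }
  supports_scc nvme0 && echo "nvme0 supports the Simple Copy command"
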
nvme/functions.sh@21 -- # IFS=: 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:44.316 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:44.317 10:37:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
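The dump above is produced by functions.sh's nvme_get loop: each "field : value" line from nvme-cli is split on the colon and eval'd into a global associative array named after the device. A minimal sketch of that loop, assuming nvme-cli's "field : value" output format; NVME_CMD is an assumption of this sketch standing in for the pinned /usr/local/src/nvme-cli/nvme binary:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declares the global array nvme1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # strip padding around the field name
            val=${val# }                 # strip the space after the colon
            [[ -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val}\""  # -> nvme1[mntmt]="0"
        done < <("${NVME_CMD:-nvme}" "$@")
    }
    # usage: nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[subnqn]}"

Splitting only on the first colon is what lets values that themselves contain colons (subnqn, the lbaf descriptors) survive intact.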
00:08:44.317 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1 (id-ctrl): ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:08:44.318 10:37:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:44.318 10:37:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:44.318 10:37:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:44.318 10:37:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:08:44.318 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 (id-ns): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
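A quick sanity check on the numbers just captured: flbas selects the active LBA format through its low nibble, so flbas=0x7 points at lbaf7 (listed further below as lbads:12, i.e. 4096-byte blocks), which makes the namespace size work out as:

    nsze=$((0x17a17a))             # 1548666 logical blocks
    lbads=12                       # from "lbaf7: ms:64 lbads:12 rp:0 (in use)"
    echo $((nsze * (1 << lbads)))  # 6343335936 bytes, roughly 5.9 GiB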
00:08:44.318 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 (id-ns): fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
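The three copy-related limits just read (mssrl=128, mcl=128, msrc=127) are only meaningful if the controller advertises the Copy command. A hedged check, assuming Copy sits at ONCS bit 8 as in recent NVMe base specifications:

    oncs=$((0x15d))    # captured earlier for this controller
    if (( oncs & (1 << 8) )); then
        echo "Copy: up to $((127 + 1)) source ranges (msrc is 0-based)"
    fi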
00:08:44.319 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 (id-ns): anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:44.319 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 (id-ns): lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:44.320 10:37:09 nvme_scc -- scripts/common.sh@27 -- # return 0
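The @58 through @63 assignments above are the scan's bookkeeping: three parallel associative arrays plus an ordered index tie each controller to its identify data, its namespace map, and its PCI address. A minimal self-contained sketch of the same structure, with names taken directly from the trace:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme1
    ctrls["$ctrl_dev"]=nvme1               # controller -> its id-ctrl array name
    nvmes["$ctrl_dev"]=nvme1_ns            # controller -> its namespace map name
    bdfs["$ctrl_dev"]=0000:00:10.0         # controller -> PCI address from sysfs
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 # numeric index keeps enumeration order

With pci_can_use returning 0 for 0000:00:12.0, the loop then repeats the whole identify sequence for the next controller.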
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:08:44.320 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:08:44.321 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3
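The oacs value just captured is a bitmask of optional admin commands. A hedged decode of oacs=0x12a, assuming the NVMe base-spec OACS bit layout (bit 1 Format NVM, bit 3 Namespace Management, bit 5 Directives, bit 8 Doorbell Buffer Config, which is consistent with a QEMU controller):

    oacs=$((0x12a))
    (( oacs & 0x002 )) && echo "Format NVM"
    (( oacs & 0x008 )) && echo "Namespace Management"
    (( oacs & 0x020 )) && echo "Directives"
    (( oacs & 0x100 )) && echo "Doorbell Buffer Config"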
00:08:44.321 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:08:44.322 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
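sqes=0x66 and cqes=0x44 each pack two powers of two into one byte: the low nibble is the required queue entry size, the high nibble the maximum. A hedged one-liner decode:

    sqes=$((0x66)); cqes=$((0x44))
    echo "SQE: $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"  # 64..64
    echo "CQE: $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"  # 16..16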
00:08:44.322 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n
0x100000 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:44.323 10:37:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:44.324 10:37:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:44.324 10:37:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.324 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:44.325 10:37:10 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.325 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:44.326 10:37:10 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.326 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
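(Editor's note: the block of trace above is nvme/functions.sh's nvme_get helper walking the output of "nvme id-ctrl" / "nvme id-ns" one "field: value" pair at a time and storing each pair in a global associative array named after the device, e.g. nvme2n2[nsze]=0x100000. Below is a minimal sketch of that loop, reconstructed only from the traced commands at nvme/functions.sh lines 16-23; the whitespace clean-up and the nvme_bin variable name are assumptions, so consult the SPDK repo for the exact implementation.)

# Sketch of the parsing loop traced above (nvme/functions.sh@16-23); illustrative only.
nvme_bin=${nvme_bin:-/usr/local/src/nvme-cli/nvme}   # assumed variable name

nvme_get() {
	local ref=$1 reg val            # @17: ref is the array name, e.g. nvme2n2
	shift                           # @18: remaining args, e.g. "id-ns /dev/nvme2n2"
	local -gA "$ref=()"             # @20: one global associative array per device

	while IFS=: read -r reg val; do # @21: split each output line at the first ':'
		[[ -n $val ]] || continue                # @22: skip lines with no value
		reg=${reg//[[:space:]]/}                 # "nsze   " -> "nsze" (assumed cleanup)
		val=${val#"${val%%[![:space:]]*}"}       # drop leading spaces (assumed cleanup)
		eval "${ref}[${reg}]=\"$val\""           # @23: e.g. nvme2n2[nsze]="0x100000"
	done < <("$nvme_bin" "$@")      # @16: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
}

# Usage matching the traced call:
nvme_get nvme2n1 id-ns /dev/nvme2n1
echo "nvme2n1: nsze=${nvme2n1[nsze]} flbas=${nvme2n1[flbas]} lbaf4=${nvme2n1[lbaf4]}"
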
00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 
10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:44.327 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 
10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:44.328 10:37:10 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.328 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:44.329 
10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:44.329 10:37:10 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:44.329 10:37:10 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:44.329 10:37:10 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:44.329 10:37:10 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:44.329 10:37:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:44.329 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
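The block above repeats a single pattern from nvme/functions.sh: nvme_get pipes nvme id-ctrl output into a loop that splits each line on ':' (functions.sh@21), skips empty values (functions.sh@22), and evals the pair into a controller-named associative array (functions.sh@23). A minimal standalone sketch of that loop, with illustrative names and simplified whitespace trimming:

  # Parse `nvme id-ctrl` into an associative array, mirroring the
  # IFS=: / read -r reg val / eval steps traced above.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # keys arrive padded with spaces
      val=${val# }                  # values keep one leading space
      [[ -n $val ]] && eval "ctrl[$reg]=\"$val\""
  done < <(nvme id-ctrl /dev/nvme3)
  echo "sn=${ctrl[sn]} oncs=${ctrl[oncs]}"

The eval is what lets one helper fill nvme0 through nvme3 by name, and the quoted assignment preserves padded values such as sn='12343 ' exactly as the trace records them.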
00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:44.330 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 
10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.331 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
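Two values recorded for this controller a few lines up are easy to misread: wctemp=343 and cctemp=373 are the warning and critical composite-temperature thresholds, and Identify Controller reports both in kelvin. A quick conversion using the values from this log:

  # NVMe reports WCTEMP/CCTEMP in kelvin.
  wctemp=343; cctemp=373
  echo "warning at $(( wctemp - 273 )) C, critical at $(( cctemp - 273 )) C"   # 70 C / 100 C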
00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
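Similarly, sqes=0x66 and cqes=0x44 above pack two sizes into each byte: the low nibble is the required queue-entry size and the high nibble the maximum, both as log2 of the size in bytes, so this controller uses 64-byte submission entries and 16-byte completion entries. Decoded with shell arithmetic:

  # Low nibble = required entry size, high nibble = maximum (log2 bytes).
  sqes=0x66; cqes=0x44
  echo "SQE $(( 1 << (sqes & 0xf) ))-$(( 1 << (sqes >> 4) )) B"   # 64-64
  echo "CQE $(( 1 << (cqes & 0xf) ))-$(( 1 << (cqes >> 4) )) B"   # 16-16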
00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.332 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:44.333 10:37:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:44.333 
10:37:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:44.333 10:37:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:44.333 10:37:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:44.334 10:37:10 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:44.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:45.164 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:45.164 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:45.164 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:45.425 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:08:45.425 10:37:11 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:45.425 10:37:11 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:45.425 10:37:11 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.425 10:37:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:45.425 ************************************ 00:08:45.425 START TEST nvme_simple_copy 00:08:45.425 ************************************ 00:08:45.425 10:37:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:45.685 Initializing NVMe Controllers 00:08:45.685 Attaching to 0000:00:10.0 00:08:45.685 Controller supports SCC. Attached to 0000:00:10.0 00:08:45.685 Namespace ID: 1 size: 6GB 00:08:45.685 Initialization complete. 00:08:45.685 00:08:45.685 Controller QEMU NVMe Ctrl (12340 ) 00:08:45.685 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:45.685 Namespace Block Size:4096 00:08:45.685 Writing LBAs 0 to 63 with Random Data 00:08:45.685 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:45.685 LBAs matching Written Data: 64 00:08:45.685 00:08:45.685 real 0m0.264s 00:08:45.685 user 0m0.103s 00:08:45.685 sys 0m0.059s 00:08:45.685 10:37:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.685 10:37:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:45.685 ************************************ 00:08:45.685 END TEST nvme_simple_copy 00:08:45.685 ************************************ 00:08:45.685 ************************************ 00:08:45.685 END TEST nvme_scc 00:08:45.685 ************************************ 00:08:45.685 00:08:45.685 real 0m7.526s 00:08:45.685 user 0m1.019s 00:08:45.685 sys 0m1.346s 00:08:45.685 10:37:11 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.685 10:37:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:45.685 10:37:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:45.685 10:37:11 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:45.685 10:37:11 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:45.685 10:37:11 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:45.685 10:37:11 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:45.685 10:37:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.685 10:37:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.685 10:37:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.685 ************************************ 00:08:45.685 START TEST nvme_fdp 00:08:45.685 ************************************ 00:08:45.685 10:37:11 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:08:45.685 * Looking for test storage... 
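The controller chosen for the simple-copy run above falls out of one check in the trace: ctrl_has_scc reads each controller's ONCS word (0x15d for all four here) and tests bit 8, which advertises the Copy command. The first match echoed, nvme1 at 0000:00:10.0, is the device the test attached to. The same test in isolation, with the value from this log:

  # ONCS bit 8 = Copy (simple copy) support; 0x15d has it set.
  oncs=0x15d
  (( oncs & 1 << 8 )) && echo "simple copy supported"   # 0x15d & 0x100 = 0x100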
00:08:45.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:45.685 10:37:11 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.685 10:37:11 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.685 10:37:11 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.945 10:37:11 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:45.945 10:37:11 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.945 10:37:11 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.945 --rc genhtml_branch_coverage=1 00:08:45.945 --rc genhtml_function_coverage=1 00:08:45.945 --rc genhtml_legend=1 00:08:45.945 --rc geninfo_all_blocks=1 00:08:45.945 --rc geninfo_unexecuted_blocks=1 00:08:45.945 00:08:45.945 ' 00:08:45.945 10:37:11 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.945 --rc genhtml_branch_coverage=1 00:08:45.945 --rc genhtml_function_coverage=1 00:08:45.945 --rc genhtml_legend=1 00:08:45.945 --rc geninfo_all_blocks=1 00:08:45.945 --rc geninfo_unexecuted_blocks=1 00:08:45.945 00:08:45.945 ' 00:08:45.945 10:37:11 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:45.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.945 --rc genhtml_branch_coverage=1 00:08:45.945 --rc genhtml_function_coverage=1 00:08:45.945 --rc genhtml_legend=1 00:08:45.945 --rc geninfo_all_blocks=1 00:08:45.945 --rc geninfo_unexecuted_blocks=1 00:08:45.945 00:08:45.945 ' 00:08:45.945 10:37:11 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.945 --rc genhtml_branch_coverage=1 00:08:45.945 --rc genhtml_function_coverage=1 00:08:45.945 --rc genhtml_legend=1 00:08:45.945 --rc geninfo_all_blocks=1 00:08:45.945 --rc geninfo_unexecuted_blocks=1 00:08:45.945 00:08:45.945 ' 00:08:45.945 10:37:11 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.945 10:37:11 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.945 10:37:11 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.945 10:37:11 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.945 10:37:11 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.945 10:37:11 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:45.945 10:37:11 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
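The lcov probe just above is a bash version comparison traced out of scripts/common.sh: lt 1.15 2 hands off to cmp_versions, which splits both strings on '.', '-' and ':' and compares the parts numerically left to right, treating missing parts as zero. A condensed sketch of the '<' case under those assumptions:

  # Field-wise numeric version compare, as traced from cmp_versions.
  lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov older than 2"   # first fields 1 < 2 decide it

Because 1.15 compares below 2 on the first field, the harness exports the pre-2.0 LCOV_OPTS shown above.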
00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:45.945 10:37:11 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:45.945 10:37:11 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.945 10:37:11 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:46.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.206 Waiting for block devices as requested 00:08:46.206 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.467 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.467 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.467 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.737 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:51.737 10:37:17 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:51.737 10:37:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:51.737 10:37:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:51.737 10:37:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:51.737 10:37:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
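With functions.sh re-sourced, scan_nvme_ctrls starts the same sysfs walk seen earlier for the SCC run: it loops over /sys/class/nvme/nvme*, derives each controller's PCI address (pci=0000:00:11.0 at functions.sh@49), and gates it through pci_can_use. How the helper derives that address is not shown in this trace; one plausible resolution, relying only on the standard sysfs 'device' symlink, would be:

  # The class device's 'device' link points at the PCI function dir,
  # whose basename is the bdf that pci_can_use checks above.
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      bdf=$(basename "$(readlink -f "$ctrl/device")")
      echo "${ctrl##*/} -> $bdf"   # e.g. nvme0 -> 0000:00:11.0
  done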
00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:51.737 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:51.738 10:37:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:51.738 10:37:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.738 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:51.739 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:51.739 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.739 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:51.740 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:51.740 
10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:51.740 10:37:17 nvme_fdp -- 
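Once the controller-level fields are captured, the same helper iterates the controller's namespaces (here /sys/class/nvme/nvme0/nvme0n1) and repeats the capture with nvme id-ns, which is where the nsze/ncap/nuse values above come from. A rough sketch of that enumeration, assuming the usual sysfs layout; names are illustrative and the real code feeds each namespace device back through nvme_get:

    # Per-namespace scan implied by the trace (functions.sh @54-@57).
    for ctrl in /sys/class/nvme/nvme*; do
        for ns in "$ctrl/${ctrl##*/}n"*; do        # e.g. .../nvme0/nvme0n1
            [[ -e $ns ]] || continue
            nvme id-ns "/dev/${ns##*/}"            # nsze, ncap, nuse, lbaf...
        done
    done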
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:51.740 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:51.741 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.741 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:51.742 10:37:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:51.742 10:37:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:51.742 10:37:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:51.742 10:37:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # 
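A quick sanity check of the geometry captured for nvme0n1: flbas 0x4 selects LBA format 4, whose lbads of 12 (marked "in use" above) means 2^12 = 4096-byte blocks, so nsze 0x140000 blocks works out to 5 GiB. With that done, the scan records the controller in its bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls) and moves on to the second controller, nvme1 at 0000:00:10.0. The arithmetic, as a small illustrative snippet:

    # Worked example using the values captured above for nvme0n1.
    nsze=$((0x140000))                  # 1310720 blocks
    lbads=12                            # from lbaf4 "lbads:12 ... (in use)"
    echo $((nsze * (1 << lbads)))       # 5368709120 bytes = 5 GiB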
IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.742 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.743 
00:08:51.743 10:37:17 nvme_fdp -- nvme/functions.sh nvme_get nvme1 id-ctrl, parsed fields: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
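The block above is bash xtrace of the nvme_get helper in SPDK's nvme/functions.sh: it runs the bundled nvme-cli binary (/usr/local/src/nvme-cli/nvme, visible later in the trace), splits each "field : value" output line on ':' via IFS, and evals the pair into a per-controller associative array. Below is a minimal, self-contained sketch of that same pattern — not the script's actual code; the device path /dev/nvme1 and the plain `nvme` binary name are assumptions for illustration.

  #!/usr/bin/env bash
  # Sketch of the IFS=: / read -r reg val parse loop traced above:
  # capture `nvme id-ctrl` fields into a bash associative array.
  declare -A ctrl

  while IFS=: read -r reg val; do
    # Skip lines that are not "field : value" pairs.
    [[ -n $val ]] || continue
    # Strip whitespace from the field name and leading space from the value.
    reg=${reg//[[:space:]]/}
    val="${val#"${val%%[![:space:]]*}"}"
    ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme1 2>/dev/null)

  echo "oacs=${ctrl[oacs]:-unset} oncs=${ctrl[oncs]:-unset} subnqn=${ctrl[subnqn]:-unset}"

Note that because read is given two variables, values containing colons (e.g. the subnqn nqn.2019-08.org.qemu:12340 above) land intact in val.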
00:08:51.745 10:37:17 nvme_fdp -- nvme/functions.sh nvme_get nvme1 power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:08:51.745 10:37:17 nvme_fdp -- nvme/functions.sh@53-57: local -n _ctrl_ns=nvme1_ns; found /sys/class/nvme/nvme1/nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:51.745 10:37:17 nvme_fdp -- nvme/functions.sh nvme_get nvme1n1 id-ns, parsed fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:51.746 10:37:17 nvme_fdp -- nvme/functions.sh nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:08:51.747 10:37:17 nvme_fdp -- nvme/functions.sh@58-63: _ctrl_ns[${ns##*n}]=nvme1n1; ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:10.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:51.747 10:37:17 nvme_fdp -- nvme/functions.sh@47-52: found /sys/class/nvme/nvme2; pci=0000:00:12.0; pci_can_use 0000:00:12.0 returns 0; ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
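The @47-@52 entries are the controller-discovery loop: iterate /sys/class/nvme/nvme*, resolve each controller's PCI address, filter it through pci_can_use, then hand the device to nvme_get. A rough sketch of that sysfs walk follows, assuming the standard sysfs layout for PCIe-attached controllers; the readlink-based BDF lookup is an assumed equivalent rather than the script's own code, and the pci_can_use filtering is omitted for brevity.

  #!/usr/bin/env bash
  # Sketch: enumerate NVMe controllers and record their PCI addresses (BDFs),
  # mirroring the discovery loop seen in the trace.
  declare -A bdfs

  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    # /sys/class/nvme/nvmeX/device links to the PCI device directory,
    # whose basename is the BDF (e.g. 0000:00:12.0) for PCIe controllers.
    pci=$(basename "$(readlink -f "$ctrl/device")")
    bdfs[$ctrl_dev]=$pci
    echo "$ctrl_dev -> ${bdfs[$ctrl_dev]}"
  done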
00:08:51.747 10:37:17 nvme_fdp -- nvme/functions.sh nvme_get nvme2 id-ctrl via /usr/local/src/nvme-cli/nvme, parsed fields: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256; oncs: [[ -n 0x15d ]]
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.749 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:51.750 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
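The xtrace above shows the nvme_get helper in nvme/functions.sh folding the output of /usr/local/src/nvme-cli/nvme id-ctrl (into the controller array nvme2) and nvme id-ns (into per-namespace arrays such as nvme2n1) into bash associative arrays: each "field : value" line is split with IFS=:, guarded with a [[ -n ... ]] check, and stored via eval. A minimal sketch of that pattern, with a hypothetical helper name and simplified whitespace trimming rather than the verbatim functions.sh implementation:

# Hedged sketch only; nvme_get_sketch is a made-up name for the pattern the trace shows.
nvme_get_sketch() {
    local ref=$1 subcmd=$2 dev=$3 reg val    # e.g. nvme2n1 id-ns /dev/nvme2n1
    local -gA "$ref=()"                      # declare the target associative array globally
    while IFS=: read -r reg val; do          # split each "field : value" line on ':'
        read -r reg <<<"$reg"                # trim surrounding whitespace from the field name
        read -r val <<<"$val"                # and from the value
        [[ -n $reg && -n $val ]] || continue # skip blank/partial lines, mirroring the [[ -n ... ]] guards above
        eval "${ref}[\$reg]=\$val"           # e.g. nvme2[sqes]=0x66, nvme2n1[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
}

After a call such as nvme_get_sketch nvme2 id-ctrl /dev/nvme2, ${nvme2[nn]} would expand to 256 and ${nvme2[subnqn]} to nqn.2019-08.org.qemu:12342, matching the values the trace records above; the trace below continues the same walk for the remaining nvme2n1 fields.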
00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:51.750 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.751 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:51.752 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.752 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
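Around functions.sh@53-58 the trace also shows how the per-controller namespace map is filled in: a nameref _ctrl_ns points at nvme2_ns, each /sys/class/nvme/nvme2/nvme2n* entry is probed, parsed with nvme_get ... id-ns, and registered under its namespace index. A hedged sketch of that loop, reusing the nvme_get_sketch helper above; the function name is hypothetical, and the per-controller map is declared locally here for self-containment (in the real script it is set up by the surrounding controller discovery):

# Hedged sketch of the namespace walk visible in the trace, not the verbatim script.
enumerate_ctrl_namespaces_sketch() {
    local ctrl=$1                           # e.g. /sys/class/nvme/nvme2
    declare -gA "${ctrl##*/}_ns"            # nvme2_ns; assumed to pre-exist in the real script
    local -n _ctrl_ns=${ctrl##*/}_ns        # nameref, as at functions.sh@53
    local ns ns_dev
    for ns in "$ctrl/${ctrl##*/}n"*; do     # /sys/class/nvme/nvme2/nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue            # skip the unexpanded glob if no namespaces exist
        ns_dev=${ns##*/}                    # e.g. nvme2n1
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # e.g. _ctrl_ns[1]=nvme2n1
    done
}

With ctrl=/sys/class/nvme/nvme2 this would yield _ctrl_ns[1]=nvme2n1, _ctrl_ns[2]=nvme2n2 and, once the fields below are read, _ctrl_ns[3]=nvme2n3, matching the order in which the trace moves through the three namespaces.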
00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:51.753 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:51.754 
10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:51.754 10:37:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:51.755 10:37:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:51.755 10:37:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:51.755 10:37:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:51.755 10:37:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:51.755 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 
10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:51.755 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 
10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.756 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
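Editorial note (simplified sketch, not the exact functions.sh code): the long trace above is the nvme_get helper walking `nvme id-ctrl /dev/nvme3` output with IFS=: and eval'ing each "field : value" pair into a global associative array (nvme3), just as it did for the namespaces earlier. The same idea, standalone, assuming nvme-cli is installed and skipping the corner cases functions.sh handles:

    # Sketch: collect "field : value" lines from nvme-cli into an associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # field name, e.g. vid, mdts, ctratt, subnqn
        val=${val#"${val%%[![:space:]]*}"}   # strip leading whitespace from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "model: ${ctrl[mn]}  ctratt: ${ctrl[ctratt]}"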
00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.757 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.758 10:37:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
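Annotation (standalone sketch, assumptions noted): the get_ctrls_with_feature / ctrl_has_fdp calls traced here reduce to one test, bit 19 of the Identify Controller CTRATT field, which flags Flexible Data Placement support. nvme1, nvme0 and nvme2 report ctratt=0x8000 (bit clear), while nvme3 below reports 0x88010 (bit set) and is therefore the controller the FDP test targets. An equivalent check outside the harness, assuming nvme-cli with JSON output and the hypothetical /dev/nvme3 path:

    # Sketch: does this controller advertise FDP (CTRATT bit 19)?
    ctratt=$(nvme id-ctrl /dev/nvme3 --output-format=json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
        printf 'FDP supported (ctratt=0x%x)\n' "$ctratt"
    else
        echo "FDP not supported"
    fi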
00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:51.758 10:37:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:51.758 10:37:17 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:51.758 10:37:17 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:52.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.581 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.581 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.581 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.838 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.838 10:37:18 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:52.838 10:37:18 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:52.838 10:37:18 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.838 10:37:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:52.838 ************************************ 00:08:52.838 START TEST nvme_flexible_data_placement 00:08:52.838 ************************************ 00:08:52.838 10:37:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:53.096 Initializing NVMe Controllers 00:08:53.096 Attaching to 0000:00:13.0 00:08:53.096 Controller supports FDP Attached to 0000:00:13.0 00:08:53.096 Namespace ID: 1 Endurance Group ID: 1 00:08:53.096 Initialization complete. 00:08:53.096 00:08:53.096 ================================== 00:08:53.096 == FDP tests for Namespace: #01 == 00:08:53.096 ================================== 00:08:53.096 00:08:53.096 Get Feature: FDP: 00:08:53.096 ================= 00:08:53.096 Enabled: Yes 00:08:53.096 FDP configuration Index: 0 00:08:53.096 00:08:53.096 FDP configurations log page 00:08:53.096 =========================== 00:08:53.096 Number of FDP configurations: 1 00:08:53.096 Version: 0 00:08:53.096 Size: 112 00:08:53.096 FDP Configuration Descriptor: 0 00:08:53.096 Descriptor Size: 96 00:08:53.096 Reclaim Group Identifier format: 2 00:08:53.096 FDP Volatile Write Cache: Not Present 00:08:53.096 FDP Configuration: Valid 00:08:53.096 Vendor Specific Size: 0 00:08:53.096 Number of Reclaim Groups: 2 00:08:53.096 Number of Reclaim Unit Handles: 8 00:08:53.096 Max Placement Identifiers: 128 00:08:53.096 Number of Namespaces Supported: 256 00:08:53.096 Reclaim unit Nominal Size: 6000000 bytes 00:08:53.096 Estimated Reclaim Unit Time Limit: Not Reported 00:08:53.096 RUH Desc #000: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #001: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #002: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #003: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #004: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #005: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #006: RUH Type: Initially Isolated 00:08:53.096 RUH Desc #007: RUH Type: Initially Isolated 00:08:53.096 00:08:53.096 FDP reclaim unit handle usage log page 00:08:53.096 ====================================== 00:08:53.096 Number of Reclaim Unit Handles: 8 00:08:53.096 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:53.096 RUH Usage Desc #001: RUH Attributes: Unused 00:08:53.096 RUH Usage Desc #002: RUH Attributes: Unused 00:08:53.096 RUH Usage Desc #003: RUH Attributes: Unused 00:08:53.096 RUH Usage Desc #004: RUH Attributes: Unused 00:08:53.096 RUH Usage Desc #005: RUH Attributes: Unused 00:08:53.096 RUH Usage Desc #006: RUH Attributes: Unused 00:08:53.096 RUH Usage Desc #007: RUH Attributes: Unused 00:08:53.096 00:08:53.096 FDP statistics log page 00:08:53.096 ======================= 00:08:53.096 Host bytes with metadata written: 979984384 00:08:53.096 Media bytes with metadata written: 980111360 00:08:53.096 Media bytes erased: 0 00:08:53.096 00:08:53.096 FDP Reclaim unit handle status 00:08:53.096 ============================== 00:08:53.096 Number of RUHS descriptors: 2 00:08:53.096 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000196a 00:08:53.096 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:08:53.096 00:08:53.096 FDP write on placement id: 0 success 00:08:53.096 00:08:53.096 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:08:53.096 00:08:53.096 IO mgmt send: RUH update for Placement ID: #0 Success 00:08:53.096 00:08:53.096 Get Feature: FDP Events for Placement handle: #0 00:08:53.097 ======================== 00:08:53.097 Number of FDP Events: 6 00:08:53.097 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:08:53.097 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:08:53.097 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:08:53.097 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:08:53.097 FDP Event: #4 Type: Media Reallocated Enabled: No 00:08:53.097 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:08:53.097 00:08:53.097 FDP events log page 00:08:53.097 =================== 00:08:53.097 Number of FDP events: 1 00:08:53.097 FDP Event #0: 00:08:53.097 Event Type: RU Not Written to Capacity 00:08:53.097 Placement Identifier: Valid 00:08:53.097 NSID: Valid 00:08:53.097 Location: Valid 00:08:53.097 Placement Identifier: 0 00:08:53.097 Event Timestamp: 5 00:08:53.097 Namespace Identifier: 1 00:08:53.097 Reclaim Group Identifier: 0 00:08:53.097 Reclaim Unit Handle Identifier: 0 00:08:53.097 00:08:53.097 FDP test passed 00:08:53.097 00:08:53.097 real 0m0.239s 00:08:53.097 user 0m0.082s 00:08:53.097 sys 0m0.055s 00:08:53.097 10:37:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.097 ************************************ 00:08:53.097 END TEST nvme_flexible_data_placement 00:08:53.097 ************************************ 00:08:53.097 10:37:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:08:53.097 ************************************ 00:08:53.097 END TEST nvme_fdp 00:08:53.097 ************************************ 00:08:53.097 00:08:53.097 real 0m7.376s 00:08:53.097 user 0m1.007s 00:08:53.097 sys 0m1.270s 00:08:53.097 10:37:18 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.097 10:37:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:53.097 10:37:18 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:08:53.097 10:37:18 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:53.097 10:37:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.097 10:37:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.097 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:08:53.097 ************************************ 00:08:53.097 START TEST nvme_rpc 00:08:53.097 ************************************ 00:08:53.097 10:37:18 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:53.097 * Looking for test storage... 
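Annotation (rough sketch, assumptions flagged): the configuration, reclaim-unit-handle usage, statistics and event sections printed by the fdp test above come from the log pages the FDP feature defines. Roughly the same raw data can be dumped with nvme-cli's generic get-log command; treat the log-page IDs (0x20-0x23) and the --lsi endurance-group selector below as assumptions to verify against your NVMe spec revision and nvme-cli version, and the device path as hypothetical:

    # Sketch: dump the FDP log pages for endurance group 1
    # (assumed IDs: 0x20 configs, 0x21 RUH usage, 0x22 statistics, 0x23 events).
    for lid in 0x20 0x21 0x22 0x23; do
        nvme get-log /dev/nvme3 --log-id="$lid" --log-len=512 --lsi=1
    done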
00:08:53.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.097 10:37:18 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.097 10:37:18 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.097 10:37:18 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.355 10:37:18 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:53.355 10:37:18 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:53.355 10:37:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.355 10:37:19 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:53.355 10:37:19 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.355 10:37:19 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.355 10:37:19 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.355 10:37:19 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.355 --rc genhtml_branch_coverage=1 00:08:53.355 --rc genhtml_function_coverage=1 00:08:53.355 --rc genhtml_legend=1 00:08:53.355 --rc geninfo_all_blocks=1 00:08:53.355 --rc geninfo_unexecuted_blocks=1 00:08:53.355 00:08:53.355 ' 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.355 --rc genhtml_branch_coverage=1 00:08:53.355 --rc genhtml_function_coverage=1 00:08:53.355 --rc genhtml_legend=1 00:08:53.355 --rc geninfo_all_blocks=1 00:08:53.355 --rc geninfo_unexecuted_blocks=1 00:08:53.355 00:08:53.355 ' 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:53.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.355 --rc genhtml_branch_coverage=1 00:08:53.355 --rc genhtml_function_coverage=1 00:08:53.355 --rc genhtml_legend=1 00:08:53.355 --rc geninfo_all_blocks=1 00:08:53.355 --rc geninfo_unexecuted_blocks=1 00:08:53.355 00:08:53.355 ' 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.355 --rc genhtml_branch_coverage=1 00:08:53.355 --rc genhtml_function_coverage=1 00:08:53.355 --rc genhtml_legend=1 00:08:53.355 --rc geninfo_all_blocks=1 00:08:53.355 --rc geninfo_unexecuted_blocks=1 00:08:53.355 00:08:53.355 ' 00:08:53.355 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.355 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.355 10:37:19 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:53.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.356 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:53.356 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65618 00:08:53.356 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:53.356 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:53.356 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65618 00:08:53.356 10:37:19 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65618 ']' 00:08:53.356 10:37:19 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.356 10:37:19 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.356 10:37:19 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.356 10:37:19 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.356 10:37:19 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.356 [2024-11-18 10:37:19.138382] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
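
A few entries up, get_first_nvme_bdf resolves to a single pipeline: gen_nvme.sh emits a bdev_nvme attach config as JSON, jq pulls out every params.traddr, and the first array element becomes the test's bdf (0000:00:10.0 here). Standalone, with the repo path from this run:

# Discover NVMe PCI addresses the way get_first_nvme_bdf does above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf 'found: %s\n' "${bdfs[@]}"   # 0000:00:10.0 through 0000:00:13.0 in this run
bdf=${bdfs[0]}
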
00:08:53.356 [2024-11-18 10:37:19.138499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65618 ] 00:08:53.613 [2024-11-18 10:37:19.291379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.613 [2024-11-18 10:37:19.385312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.613 [2024-11-18 10:37:19.385470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.178 10:37:19 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.178 10:37:19 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:54.178 10:37:19 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:54.436 Nvme0n1 00:08:54.436 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:54.436 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:08:54.694 request: 00:08:54.694 { 00:08:54.694 "bdev_name": "Nvme0n1", 00:08:54.695 "filename": "non_existing_file", 00:08:54.695 "method": "bdev_nvme_apply_firmware", 00:08:54.695 "req_id": 1 00:08:54.695 } 00:08:54.695 Got JSON-RPC error response 00:08:54.695 response: 00:08:54.695 { 00:08:54.695 "code": -32603, 00:08:54.695 "message": "open file failed." 00:08:54.695 } 00:08:54.695 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:08:54.695 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:08:54.695 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:08:54.952 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:54.952 10:37:20 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65618 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65618 ']' 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65618 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65618 00:08:54.952 killing process with pid 65618 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65618' 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65618 00:08:54.952 10:37:20 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65618 00:08:56.388 ************************************ 00:08:56.388 END TEST nvme_rpc 00:08:56.388 ************************************ 00:08:56.388 00:08:56.388 real 0m3.232s 00:08:56.388 user 0m6.192s 00:08:56.388 sys 0m0.453s 00:08:56.388 10:37:22 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.388 10:37:22 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.388 10:37:22 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:56.388 10:37:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:08:56.388 10:37:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.388 10:37:22 -- common/autotest_common.sh@10 -- # set +x 00:08:56.388 ************************************ 00:08:56.388 START TEST nvme_rpc_timeouts 00:08:56.388 ************************************ 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:56.388 * Looking for test storage... 00:08:56.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:08:56.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
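
Looking back at the nvme_rpc section above, the whole test is the JSON-RPC exchange printed there: attach the controller as Nvme0, ask bdev_nvme_apply_firmware to load a file that does not exist, and pass only because the target answers with error -32603 ("open file failed."). The same negative test by hand, a sketch using the rpc.py path from this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
# The firmware path is deliberately bogus; success here would be a bug.
if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "apply_firmware unexpectedly succeeded" >&2
    exit 1
fi
"$rpc" bdev_nvme_detach_controller Nvme0
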
00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.388 10:37:22 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.388 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.388 --rc genhtml_branch_coverage=1 00:08:56.388 --rc genhtml_function_coverage=1 00:08:56.388 --rc genhtml_legend=1 00:08:56.388 --rc geninfo_all_blocks=1 00:08:56.388 --rc geninfo_unexecuted_blocks=1 00:08:56.388 00:08:56.388 ' 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.389 --rc genhtml_branch_coverage=1 00:08:56.389 --rc genhtml_function_coverage=1 00:08:56.389 --rc genhtml_legend=1 00:08:56.389 --rc geninfo_all_blocks=1 00:08:56.389 --rc geninfo_unexecuted_blocks=1 00:08:56.389 00:08:56.389 ' 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.389 --rc genhtml_branch_coverage=1 00:08:56.389 --rc genhtml_function_coverage=1 00:08:56.389 --rc genhtml_legend=1 00:08:56.389 --rc geninfo_all_blocks=1 00:08:56.389 --rc geninfo_unexecuted_blocks=1 00:08:56.389 00:08:56.389 ' 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.389 --rc genhtml_branch_coverage=1 00:08:56.389 --rc genhtml_function_coverage=1 00:08:56.389 --rc genhtml_legend=1 00:08:56.389 --rc geninfo_all_blocks=1 00:08:56.389 --rc geninfo_unexecuted_blocks=1 00:08:56.389 00:08:56.389 ' 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65683 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65683 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65715 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65715 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65715 ']' 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
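
The waitforlisten/trap pair traced here is the harness's standard lifecycle: start spdk_tgt in the background, arm a trap so the target dies with the script, then poll the RPC socket until it answers. A compressed approximation (the real waitforlisten in autotest_common.sh also caps retries, hence the max_retries=100 above, and the trap is cleared again at teardown):

tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$tgt" -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
# Poll until the RPC socket accepts a request, or the target dies first.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
    sleep 0.1
done
echo "spdk_tgt ($spdk_tgt_pid) is up on /var/tmp/spdk.sock"
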
00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.389 10:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:56.389 10:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:56.647 [2024-11-18 10:37:22.345035] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:56.647 [2024-11-18 10:37:22.345153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65715 ] 00:08:56.647 [2024-11-18 10:37:22.502909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:56.906 [2024-11-18 10:37:22.601585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.906 [2024-11-18 10:37:22.601663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.473 Checking default timeout settings: 00:08:57.473 10:37:23 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.473 10:37:23 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:08:57.473 10:37:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:08:57.473 10:37:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:57.731 Making settings changes with rpc: 00:08:57.731 10:37:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:08:57.731 10:37:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:08:57.989 Check default vs. modified settings: 00:08:57.989 10:37:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:08:57.989 10:37:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:58.248 Setting action_on_timeout is changed as expected. 
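
Everything from here to the last "changed as expected" is a snapshot-and-diff: save_config before and after bdev_nvme_set_options, then pull each field out of both JSON dumps with the grep/awk/sed chain. Condensed (the real tmp files carry the target pid, e.g. /tmp/settings_default_65683):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" save_config > /tmp/settings_default          # snapshot the defaults
"$rpc" bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
"$rpc" save_config > /tmp/settings_modified         # snapshot after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    echo "$setting: ${before:-unset} -> $after"     # e.g. none -> abort
done
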
00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:58.248 Setting timeout_us is changed as expected. 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:58.248 Setting timeout_admin_us is changed as expected. 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65683 /tmp/settings_modified_65683 00:08:58.248 10:37:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65715 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65715 ']' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65715 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65715 00:08:58.248 killing process with pid 65715 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65715' 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65715 00:08:58.248 10:37:24 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65715 00:08:59.624 RPC TIMEOUT SETTING TEST PASSED. 00:08:59.624 10:37:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:08:59.624 00:08:59.624 real 0m3.188s 00:08:59.624 user 0m6.236s 00:08:59.624 sys 0m0.483s 00:08:59.624 ************************************ 00:08:59.624 END TEST nvme_rpc_timeouts 00:08:59.624 ************************************ 00:08:59.624 10:37:25 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.624 10:37:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:59.624 10:37:25 -- spdk/autotest.sh@239 -- # uname -s 00:08:59.624 10:37:25 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:08:59.624 10:37:25 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:59.624 10:37:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.624 10:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.624 10:37:25 -- common/autotest_common.sh@10 -- # set +x 00:08:59.624 ************************************ 00:08:59.624 START TEST sw_hotplug 00:08:59.624 ************************************ 00:08:59.624 10:37:25 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:59.624 * Looking for test storage... 
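
The sw_hotplug run starting here first has to find its disks. nvme_in_userspace, traced a little further down, builds the PCI class/subclass/progif triple for an NVM Express controller (01/08/02) and sieves lspci with it: the grep keeps only -p02 lines and the awk pattern keeps lines whose class field matches "0108". The scan, condensed from the xtrace below:

# NVMe controllers: PCI class 01 (mass storage), subclass 08 (NVM), progif 02.
lspci -mm -n -D \
    | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
# One bdf per line; this run yields 0000:00:10.0 through 0000:00:13.0.
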
00:08:59.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:59.624 10:37:25 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.624 10:37:25 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.624 10:37:25 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.624 10:37:25 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.624 10:37:25 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.882 10:37:25 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:08:59.882 10:37:25 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.882 10:37:25 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.882 --rc genhtml_branch_coverage=1 00:08:59.882 --rc genhtml_function_coverage=1 00:08:59.882 --rc genhtml_legend=1 00:08:59.882 --rc geninfo_all_blocks=1 00:08:59.882 --rc geninfo_unexecuted_blocks=1 00:08:59.882 00:08:59.882 ' 00:08:59.882 10:37:25 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.882 --rc genhtml_branch_coverage=1 00:08:59.882 --rc genhtml_function_coverage=1 00:08:59.882 --rc genhtml_legend=1 00:08:59.882 --rc geninfo_all_blocks=1 00:08:59.882 --rc geninfo_unexecuted_blocks=1 00:08:59.882 00:08:59.882 ' 00:08:59.882 10:37:25 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.882 --rc genhtml_branch_coverage=1 00:08:59.882 --rc genhtml_function_coverage=1 00:08:59.882 --rc genhtml_legend=1 00:08:59.882 --rc geninfo_all_blocks=1 00:08:59.882 --rc geninfo_unexecuted_blocks=1 00:08:59.882 00:08:59.882 ' 00:08:59.882 10:37:25 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.882 --rc genhtml_branch_coverage=1 00:08:59.882 --rc genhtml_function_coverage=1 00:08:59.882 --rc genhtml_legend=1 00:08:59.882 --rc geninfo_all_blocks=1 00:08:59.882 --rc geninfo_unexecuted_blocks=1 00:08:59.882 00:08:59.882 ' 00:08:59.882 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.141 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:00.141 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:00.141 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:00.141 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:00.141 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:00.141 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:00.141 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:00.141 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:00.141 
10:37:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:00.141 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:00.142 10:37:25 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:00.142 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:00.142 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:00.142 10:37:25 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:00.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.658 Waiting for block devices as requested 00:09:00.658 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.658 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.916 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.916 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:06.188 10:37:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:06.189 10:37:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:06.189 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:06.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.447 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:06.447 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:06.705 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.705 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.705 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:06.705 10:37:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66568 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:06.963 10:37:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:06.963 10:37:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:06.963 10:37:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:06.963 10:37:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:06.963 10:37:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:06.963 10:37:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:07.220 Initializing NVMe Controllers 00:09:07.220 Attaching to 0000:00:10.0 00:09:07.220 Attaching to 0000:00:11.0 00:09:07.220 Attached to 0000:00:11.0 00:09:07.220 Attached to 0000:00:10.0 00:09:07.220 Initialization complete. Starting I/O... 00:09:07.220 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:07.220 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:07.220 00:09:08.154 QEMU NVMe Ctrl (12341 ): 2436 I/Os completed (+2436) 00:09:08.154 QEMU NVMe Ctrl (12340 ): 2443 I/Os completed (+2443) 00:09:08.154 00:09:09.093 QEMU NVMe Ctrl (12341 ): 5618 I/Os completed (+3182) 00:09:09.093 QEMU NVMe Ctrl (12340 ): 5627 I/Os completed (+3184) 00:09:09.093 00:09:10.042 QEMU NVMe Ctrl (12341 ): 8753 I/Os completed (+3135) 00:09:10.042 QEMU NVMe Ctrl (12340 ): 8741 I/Os completed (+3114) 00:09:10.042 00:09:10.975 QEMU NVMe Ctrl (12341 ): 12042 I/Os completed (+3289) 00:09:10.975 QEMU NVMe Ctrl (12340 ): 12021 I/Os completed (+3280) 00:09:10.975 00:09:12.347 QEMU NVMe Ctrl (12341 ): 15314 I/Os completed (+3272) 00:09:12.347 QEMU NVMe Ctrl (12340 ): 15315 I/Os completed (+3294) 00:09:12.347 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:12.913 [2024-11-18 10:37:38.670234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:12.913 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:12.913 [2024-11-18 10:37:38.671288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.671329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.671344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.671359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:12.913 [2024-11-18 10:37:38.672775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.672811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.672823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.672834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:12.913 [2024-11-18 10:37:38.692174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
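
The echo 1 just traced at sw_hotplug.sh line 40, the rescan/rebind echoes at lines 56-62 that follow, and the failed-state storms in between are sysfs-driven surprise removal: write the device out from under the running hotplug example (started above with -i 0 -t 0 -n 6 -r 6), wait, then rescan and rebind. A hedged sketch of one cycle, assuming the standard sysfs knobs; the exact writes and their ordering live in sw_hotplug.sh, and only the /sys/bus/pci/rescan path is confirmed later in this log's trap:

bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the device
sleep 6                                        # hotplug_wait, as set above
echo 1 > /sys/bus/pci/rescan                   # rediscover it (see the trap below)
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe       # rebind honoring the override
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override
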
00:09:12.913 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:12.913 [2024-11-18 10:37:38.693146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.693250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.693271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.693286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:12.913 [2024-11-18 10:37:38.694623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.694655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.694667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 [2024-11-18 10:37:38.694678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:12.913 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:12.913 EAL: Scan for (pci) bus failed. 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:12.913 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:13.171 Attaching to 0000:00:10.0 00:09:13.171 Attached to 0000:00:10.0 00:09:13.171 QEMU NVMe Ctrl (12340 ): 28 I/Os completed (+28) 00:09:13.171 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:13.171 10:37:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:13.171 Attaching to 0000:00:11.0 00:09:13.171 Attached to 0000:00:11.0 00:09:14.109 QEMU NVMe Ctrl (12340 ): 3731 I/Os completed (+3703) 00:09:14.109 QEMU NVMe Ctrl (12341 ): 3377 I/Os completed (+3377) 00:09:14.109 00:09:15.044 QEMU NVMe Ctrl (12340 ): 7133 I/Os completed (+3402) 00:09:15.044 QEMU NVMe Ctrl (12341 ): 6779 I/Os completed (+3402) 00:09:15.044 00:09:15.979 QEMU NVMe Ctrl (12340 ): 10341 I/Os completed (+3208) 00:09:15.979 QEMU NVMe Ctrl (12341 ): 9939 I/Os completed (+3160) 00:09:15.979 00:09:17.354 QEMU NVMe Ctrl (12340 ): 13577 I/Os completed (+3236) 00:09:17.354 QEMU NVMe Ctrl (12341 ): 13175 I/Os completed (+3236) 00:09:17.354 00:09:18.289 QEMU NVMe Ctrl (12340 ): 16699 I/Os completed (+3122) 00:09:18.289 QEMU NVMe Ctrl (12341 ): 16297 I/Os completed (+3122) 00:09:18.289 00:09:19.224 QEMU NVMe Ctrl (12340 ): 19851 I/Os completed (+3152) 00:09:19.224 QEMU NVMe Ctrl (12341 ): 19453 I/Os completed (+3156) 00:09:19.224 00:09:20.158 QEMU NVMe Ctrl (12340 ): 23301 I/Os completed (+3450) 
00:09:20.158 QEMU NVMe Ctrl (12341 ): 22900 I/Os completed (+3447) 00:09:20.158 00:09:21.163 QEMU NVMe Ctrl (12340 ): 26970 I/Os completed (+3669) 00:09:21.163 QEMU NVMe Ctrl (12341 ): 26556 I/Os completed (+3656) 00:09:21.163 00:09:22.121 QEMU NVMe Ctrl (12340 ): 30638 I/Os completed (+3668) 00:09:22.121 QEMU NVMe Ctrl (12341 ): 30175 I/Os completed (+3619) 00:09:22.121 00:09:23.055 QEMU NVMe Ctrl (12340 ): 33942 I/Os completed (+3304) 00:09:23.055 QEMU NVMe Ctrl (12341 ): 33505 I/Os completed (+3330) 00:09:23.055 00:09:23.987 QEMU NVMe Ctrl (12340 ): 37042 I/Os completed (+3100) 00:09:23.987 QEMU NVMe Ctrl (12341 ): 36642 I/Os completed (+3137) 00:09:23.987 00:09:25.359 QEMU NVMe Ctrl (12340 ): 40243 I/Os completed (+3201) 00:09:25.359 QEMU NVMe Ctrl (12341 ): 39834 I/Os completed (+3192) 00:09:25.359 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:25.359 [2024-11-18 10:37:50.943171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:25.359 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:25.359 [2024-11-18 10:37:50.944527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.944924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.945007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.945045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:25.359 [2024-11-18 10:37:50.947527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.947615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.947649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.948149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:25.359 [2024-11-18 10:37:50.967427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:25.359 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:25.359 [2024-11-18 10:37:50.968803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.968921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.968978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.969022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:25.359 [2024-11-18 10:37:50.970845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.970941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.970976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 [2024-11-18 10:37:50.971035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:25.359 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:25.359 EAL: Scan for (pci) bus failed. 00:09:25.359 10:37:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:25.359 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:25.359 Attaching to 0000:00:10.0 00:09:25.359 Attached to 0000:00:10.0 00:09:25.617 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:25.617 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:25.617 10:37:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:25.617 Attaching to 0000:00:11.0 00:09:25.617 Attached to 0000:00:11.0 00:09:26.183 QEMU NVMe Ctrl (12340 ): 2297 I/Os completed (+2297) 00:09:26.183 QEMU NVMe Ctrl (12341 ): 2011 I/Os completed (+2011) 00:09:26.183 00:09:27.124 QEMU NVMe Ctrl (12340 ): 5946 I/Os completed (+3649) 00:09:27.124 QEMU NVMe Ctrl (12341 ): 5666 I/Os completed (+3655) 00:09:27.124 00:09:28.064 QEMU NVMe Ctrl (12340 ): 9654 I/Os completed (+3708) 00:09:28.064 QEMU NVMe Ctrl (12341 ): 9399 I/Os completed (+3733) 00:09:28.064 00:09:29.005 QEMU NVMe Ctrl (12340 ): 13281 I/Os completed (+3627) 00:09:29.005 QEMU NVMe Ctrl (12341 ): 13026 I/Os completed (+3627) 00:09:29.005 00:09:30.387 QEMU NVMe Ctrl (12340 ): 16992 I/Os completed (+3711) 00:09:30.387 QEMU NVMe Ctrl (12341 ): 16754 I/Os completed (+3728) 00:09:30.387 00:09:31.329 QEMU NVMe Ctrl (12340 ): 20629 I/Os completed (+3637) 00:09:31.329 QEMU NVMe Ctrl (12341 ): 20401 I/Os completed (+3647) 00:09:31.329 00:09:32.274 QEMU NVMe Ctrl (12340 ): 23896 I/Os completed (+3267) 00:09:32.274 QEMU NVMe Ctrl (12341 ): 23790 I/Os completed (+3389) 00:09:32.274 
00:09:33.283 QEMU NVMe Ctrl (12340 ): 26955 I/Os completed (+3059) 00:09:33.283 QEMU NVMe Ctrl (12341 ): 26823 I/Os completed (+3033) 00:09:33.283 00:09:34.228 QEMU NVMe Ctrl (12340 ): 30088 I/Os completed (+3133) 00:09:34.228 QEMU NVMe Ctrl (12341 ): 29954 I/Os completed (+3131) 00:09:34.228 00:09:35.170 QEMU NVMe Ctrl (12340 ): 33690 I/Os completed (+3602) 00:09:35.170 QEMU NVMe Ctrl (12341 ): 33550 I/Os completed (+3596) 00:09:35.170 00:09:36.114 QEMU NVMe Ctrl (12340 ): 37449 I/Os completed (+3759) 00:09:36.114 QEMU NVMe Ctrl (12341 ): 37310 I/Os completed (+3760) 00:09:36.114 00:09:37.058 QEMU NVMe Ctrl (12340 ): 41052 I/Os completed (+3603) 00:09:37.058 QEMU NVMe Ctrl (12341 ): 40914 I/Os completed (+3604) 00:09:37.058 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:37.628 [2024-11-18 10:38:03.267454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:37.628 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:37.628 [2024-11-18 10:38:03.268510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.268631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.268661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.268725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:37.628 [2024-11-18 10:38:03.270363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.270403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.270415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.270426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:37.628 [2024-11-18 10:38:03.287963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:37.628 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:37.628 [2024-11-18 10:38:03.288835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.288951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.288971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.288985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:37.628 [2024-11-18 10:38:03.290426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.290506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.290567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 [2024-11-18 10:38:03.290591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:37.628 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:37.628 Attaching to 0000:00:10.0 00:09:37.628 Attached to 0000:00:10.0 00:09:37.886 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:37.886 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:37.886 10:38:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:37.886 Attaching to 0000:00:11.0 00:09:37.886 Attached to 0000:00:11.0 00:09:37.886 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:37.886 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:37.886 [2024-11-18 10:38:03.549228] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:50.109 10:38:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:50.109 10:38:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:50.109 10:38:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.87 00:09:50.109 10:38:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.87 00:09:50.109 10:38:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:09:50.109 10:38:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.87 00:09:50.109 10:38:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 2 00:09:50.109 remove_attach_helper took 42.87s to complete (handling 2 nvme drive(s)) 10:38:15 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66568 00:09:56.691 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66568) - No such process 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66568 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67117 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67117 00:09:56.691 10:38:21 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:56.691 10:38:21 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67117 ']' 00:09:56.691 10:38:21 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.691 10:38:21 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.691 10:38:21 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.691 10:38:21 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.691 10:38:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 [2024-11-18 10:38:21.628417] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:56.691 [2024-11-18 10:38:21.628535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67117 ] 00:09:56.691 [2024-11-18 10:38:21.787772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.691 [2024-11-18 10:38:21.907083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:56.949 10:38:22 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:56.949 10:38:22 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:56.949 10:38:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:03.506 10:38:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.506 10:38:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:03.506 10:38:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:03.506 10:38:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:03.506 [2024-11-18 10:38:28.694047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:03.506 [2024-11-18 10:38:28.695284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.506 [2024-11-18 10:38:28.695318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.506 [2024-11-18 10:38:28.695331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.506 [2024-11-18 10:38:28.695348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:28.695356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:28.695364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 [2024-11-18 10:38:28.695371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:28.695378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:28.695385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 [2024-11-18 10:38:28.695396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:28.695403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:28.695410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:03.507 10:38:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.507 10:38:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:03.507 [2024-11-18 10:38:29.194042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:03.507 [2024-11-18 10:38:29.195273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:29.195301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:29.195313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 [2024-11-18 10:38:29.195328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:29.195336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:29.195343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 [2024-11-18 10:38:29.195351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:29.195358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:29.195365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 [2024-11-18 10:38:29.195372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.507 [2024-11-18 10:38:29.195380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.507 [2024-11-18 10:38:29.195386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.507 10:38:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:03.507 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:04.074 10:38:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.074 10:38:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:04.074 10:38:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.074 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:04.332 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:04.332 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.332 10:38:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:16.540 10:38:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:16.540 10:38:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:16.540 10:38:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:16.540 10:38:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:16.540 10:38:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:16.540 10:38:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:16.540 10:38:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 10:38:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 10:38:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:16.540 10:38:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 10:38:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 10:38:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:16.540 
10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:16.540 [2024-11-18 10:38:42.094262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:16.540 [2024-11-18 10:38:42.095429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.540 [2024-11-18 10:38:42.095541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.540 [2024-11-18 10:38:42.095555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.540 [2024-11-18 10:38:42.095572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.540 [2024-11-18 10:38:42.095579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.540 [2024-11-18 10:38:42.095587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.540 [2024-11-18 10:38:42.095594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.540 [2024-11-18 10:38:42.095602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.541 [2024-11-18 10:38:42.095608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.541 [2024-11-18 10:38:42.095617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.541 [2024-11-18 10:38:42.095623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.541 [2024-11-18 10:38:42.095630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.802 [2024-11-18 10:38:42.594262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:16.802 [2024-11-18 10:38:42.595471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.802 [2024-11-18 10:38:42.595498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.802 [2024-11-18 10:38:42.595510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.802 [2024-11-18 10:38:42.595522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.802 [2024-11-18 10:38:42.595530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.802 [2024-11-18 10:38:42.595537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.802 [2024-11-18 10:38:42.595546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.802 [2024-11-18 10:38:42.595552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.802 [2024-11-18 10:38:42.595560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.802 [2024-11-18 10:38:42.595567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.802 [2024-11-18 10:38:42.595574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:16.802 [2024-11-18 10:38:42.595580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:16.802 10:38:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.802 10:38:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:16.802 10:38:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:16.802 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.062 10:38:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:29.268 10:38:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.268 10:38:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:29.268 10:38:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:29.268 10:38:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.268 10:38:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:29.268 10:38:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:29.268 10:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:29.268 [2024-11-18 10:38:54.994491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:29.268 [2024-11-18 10:38:54.995812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.268 [2024-11-18 10:38:54.995848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.268 [2024-11-18 10:38:54.995859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.268 [2024-11-18 10:38:54.995874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.268 [2024-11-18 10:38:54.995881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.268 [2024-11-18 10:38:54.995892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.268 [2024-11-18 10:38:54.995899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.268 [2024-11-18 10:38:54.995907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.268 [2024-11-18 10:38:54.995914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.268 [2024-11-18 10:38:54.995921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.268 [2024-11-18 10:38:54.995928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.268 [2024-11-18 10:38:54.995936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:29.841 10:38:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.841 10:38:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:29.841 [2024-11-18 10:38:55.494495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:29.841 [2024-11-18 10:38:55.495680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.841 [2024-11-18 10:38:55.495714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.841 [2024-11-18 10:38:55.495727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.841 [2024-11-18 10:38:55.495742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.841 [2024-11-18 10:38:55.495751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.841 [2024-11-18 10:38:55.495758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.841 [2024-11-18 10:38:55.495766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.841 [2024-11-18 10:38:55.495773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.841 [2024-11-18 10:38:55.495783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.841 [2024-11-18 10:38:55.495790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.841 [2024-11-18 10:38:55.495798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:29.841 [2024-11-18 10:38:55.495808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:29.841 10:38:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:29.841 10:38:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:30.414 10:38:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.414 10:38:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.414 10:38:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:30.414 10:38:56 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:30.414 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:30.676 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:30.676 10:38:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.74 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.74 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:10:42.879 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:42.879 10:39:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:42.879 10:39:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.437 10:39:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.437 10:39:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.437 10:39:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.437 [2024-11-18 10:39:14.462017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:49.437 [2024-11-18 10:39:14.463096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.463222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.463237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.463256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.463264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.463273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.463280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.463288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.463295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.463303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.463309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.463319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.862020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:49.437 [2024-11-18 10:39:14.864245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.864274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.864284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.864297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.864306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.864313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.864322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.864328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.864336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 [2024-11-18 10:39:14.864343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.437 [2024-11-18 10:39:14.864351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.437 [2024-11-18 10:39:14.864357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.437 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.438 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.438 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.438 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.438 10:39:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.438 10:39:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.438 10:39:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.438 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:49.438 10:39:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.438 10:39:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.658 10:39:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.658 10:39:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.658 10:39:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.658 [2024-11-18 10:39:27.262275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:01.658 [2024-11-18 10:39:27.263376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.658 [2024-11-18 10:39:27.263486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.658 [2024-11-18 10:39:27.263549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.658 [2024-11-18 10:39:27.263615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.658 [2024-11-18 10:39:27.263634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.658 [2024-11-18 10:39:27.263683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.658 [2024-11-18 10:39:27.263709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.658 [2024-11-18 10:39:27.263786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.658 [2024-11-18 10:39:27.263811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.658 [2024-11-18 10:39:27.263836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.658 [2024-11-18 10:39:27.263879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.658 [2024-11-18 10:39:27.263909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.658 10:39:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.658 10:39:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.658 10:39:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:01.658 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:01.917 [2024-11-18 10:39:27.762276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:01.917 [2024-11-18 10:39:27.763355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.917 [2024-11-18 10:39:27.763459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.917 [2024-11-18 10:39:27.763474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.917 [2024-11-18 10:39:27.763486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.917 [2024-11-18 10:39:27.763497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.917 [2024-11-18 10:39:27.763504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.917 [2024-11-18 10:39:27.763514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.917 [2024-11-18 10:39:27.763520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.917 [2024-11-18 10:39:27.763528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.917 [2024-11-18 10:39:27.763535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.917 [2024-11-18 10:39:27.763543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.917 [2024-11-18 10:39:27.763549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.175 10:39:27 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.175 10:39:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.175 10:39:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.175 10:39:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.175 10:39:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:02.175 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:02.175 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.175 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.175 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.175 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:02.432 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.432 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.432 10:39:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.639 10:39:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.639 10:39:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.639 10:39:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.639 [2024-11-18 10:39:40.162499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:14.639 [2024-11-18 10:39:40.163565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.639 [2024-11-18 10:39:40.163595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.639 [2024-11-18 10:39:40.163605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.639 [2024-11-18 10:39:40.163622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.639 [2024-11-18 10:39:40.163629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.639 [2024-11-18 10:39:40.163640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.639 [2024-11-18 10:39:40.163647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.639 [2024-11-18 10:39:40.163657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.639 [2024-11-18 10:39:40.163664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.639 [2024-11-18 10:39:40.163673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.639 [2024-11-18 10:39:40.163679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.639 [2024-11-18 10:39:40.163687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.639 10:39:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.639 10:39:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.639 10:39:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:14.639 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.898 [2024-11-18 10:39:40.562496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:14.898 [2024-11-18 10:39:40.563401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.898 [2024-11-18 10:39:40.563429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.898 [2024-11-18 10:39:40.563441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.898 [2024-11-18 10:39:40.563452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.898 [2024-11-18 10:39:40.563462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.898 [2024-11-18 10:39:40.563469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.898 [2024-11-18 10:39:40.563477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.898 [2024-11-18 10:39:40.563484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.898 [2024-11-18 10:39:40.563492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.898 [2024-11-18 10:39:40.563499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.898 [2024-11-18 10:39:40.563509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.898 [2024-11-18 10:39:40.563515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.898 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:14.898 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.898 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.898 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.898 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.898 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.898 10:39:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.898 10:39:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.898 10:39:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.899 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:14.899 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:15.158 10:39:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.158 10:39:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.158 10:39:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.67 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.67 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:11:27.415 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:27.415 10:39:53 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67117 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67117 ']' 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67117 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67117 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.415 killing process with pid 67117 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67117' 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67117 00:11:27.415 10:39:53 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67117 00:11:28.793 10:39:54 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:28.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.365 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:29.365 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:29.365 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.365 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.365 00:11:29.365 real 2m29.826s 00:11:29.365 user 1m51.568s 00:11:29.365 sys 0m16.987s 00:11:29.365 ************************************ 
00:11:29.365 END TEST sw_hotplug 00:11:29.365 10:39:55 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.365 10:39:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.366 ************************************ 00:11:29.628 10:39:55 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:29.628 10:39:55 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:29.628 10:39:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.628 10:39:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.628 10:39:55 -- common/autotest_common.sh@10 -- # set +x 00:11:29.628 ************************************ 00:11:29.628 START TEST nvme_xnvme 00:11:29.628 ************************************ 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:29.628 * Looking for test storage... 00:11:29.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.628 --rc genhtml_branch_coverage=1 00:11:29.628 --rc genhtml_function_coverage=1 00:11:29.628 --rc genhtml_legend=1 00:11:29.628 --rc geninfo_all_blocks=1 00:11:29.628 --rc geninfo_unexecuted_blocks=1 00:11:29.628 00:11:29.628 ' 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.628 --rc genhtml_branch_coverage=1 00:11:29.628 --rc genhtml_function_coverage=1 00:11:29.628 --rc genhtml_legend=1 00:11:29.628 --rc geninfo_all_blocks=1 00:11:29.628 --rc geninfo_unexecuted_blocks=1 00:11:29.628 00:11:29.628 ' 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.628 --rc genhtml_branch_coverage=1 00:11:29.628 --rc genhtml_function_coverage=1 00:11:29.628 --rc genhtml_legend=1 00:11:29.628 --rc geninfo_all_blocks=1 00:11:29.628 --rc geninfo_unexecuted_blocks=1 00:11:29.628 00:11:29.628 ' 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.628 --rc genhtml_branch_coverage=1 00:11:29.628 --rc genhtml_function_coverage=1 00:11:29.628 --rc genhtml_legend=1 00:11:29.628 --rc geninfo_all_blocks=1 00:11:29.628 --rc geninfo_unexecuted_blocks=1 00:11:29.628 00:11:29.628 ' 00:11:29.628 10:39:55 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.628 10:39:55 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.628 10:39:55 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.628 10:39:55 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.628 10:39:55 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.628 10:39:55 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:29.628 10:39:55 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.628 10:39:55 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.628 10:39:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:29.628 ************************************ 00:11:29.628 START TEST xnvme_to_malloc_dd_copy 00:11:29.628 ************************************ 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:29.628 10:39:55 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:29.628 10:39:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:29.889 { 00:11:29.889 "subsystems": [ 00:11:29.889 { 00:11:29.889 "subsystem": "bdev", 00:11:29.889 "config": [ 00:11:29.889 { 00:11:29.889 "params": { 00:11:29.889 "block_size": 512, 00:11:29.889 "num_blocks": 2097152, 00:11:29.889 "name": "malloc0" 00:11:29.889 }, 00:11:29.889 "method": "bdev_malloc_create" 00:11:29.889 }, 00:11:29.889 { 00:11:29.889 "params": { 00:11:29.889 "io_mechanism": "libaio", 00:11:29.889 "filename": "/dev/nullb0", 00:11:29.889 "name": "null0" 00:11:29.889 }, 00:11:29.889 "method": "bdev_xnvme_create" 00:11:29.889 }, 00:11:29.889 { 00:11:29.889 "method": "bdev_wait_for_examine" 00:11:29.889 } 00:11:29.889 ] 00:11:29.889 } 00:11:29.889 ] 00:11:29.889 } 00:11:29.889 [2024-11-18 10:39:55.543244] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
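The JSON block above is the bdev configuration that xnvme.sh hands to spdk_dd over /dev/fd/62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) and an xnvme bdev over /dev/nullb0 using libaio. A minimal standalone sketch of the same copy step, assuming the repo path shown in this log and a /dev/nullb0 created with "modprobe null_blk gb=1" as the test does; /tmp/xnvme_copy.json is a hypothetical stand-in for the /dev/fd/62 descriptor.

# Sketch only: reproduces the malloc0 -> null0 copy traced at xnvme.sh@42.
# Assumes /dev/nullb0 exists (the test loads it with: modprobe null_blk gb=1)
# and that the SPDK build lives under /home/vagrant/spdk_repo/spdk as above.
cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json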
00:11:29.889 [2024-11-18 10:39:55.543390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68503 ] 00:11:29.889 [2024-11-18 10:39:55.705891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.150 [2024-11-18 10:39:55.827504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.065  [2024-11-18T10:39:59.335Z] Copying: 226/1024 [MB] (226 MBps) [2024-11-18T10:40:00.277Z] Copying: 452/1024 [MB] (226 MBps) [2024-11-18T10:40:01.218Z] Copying: 712/1024 [MB] (259 MBps) [2024-11-18T10:40:01.218Z] Copying: 1012/1024 [MB] (300 MBps) [2024-11-18T10:40:03.175Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:11:37.291 00:11:37.291 10:40:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:37.291 10:40:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:37.291 10:40:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:37.291 10:40:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:37.291 { 00:11:37.291 "subsystems": [ 00:11:37.291 { 00:11:37.291 "subsystem": "bdev", 00:11:37.291 "config": [ 00:11:37.291 { 00:11:37.291 "params": { 00:11:37.291 "block_size": 512, 00:11:37.291 "num_blocks": 2097152, 00:11:37.291 "name": "malloc0" 00:11:37.291 }, 00:11:37.291 "method": "bdev_malloc_create" 00:11:37.291 }, 00:11:37.291 { 00:11:37.291 "params": { 00:11:37.291 "io_mechanism": "libaio", 00:11:37.291 "filename": "/dev/nullb0", 00:11:37.291 "name": "null0" 00:11:37.291 }, 00:11:37.291 "method": "bdev_xnvme_create" 00:11:37.291 }, 00:11:37.291 { 00:11:37.291 "method": "bdev_wait_for_examine" 00:11:37.291 } 00:11:37.291 ] 00:11:37.291 } 00:11:37.291 ] 00:11:37.291 } 00:11:37.291 [2024-11-18 10:40:02.974605] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:37.291 [2024-11-18 10:40:02.974727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68591 ] 00:11:37.291 [2024-11-18 10:40:03.137978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.552 [2024-11-18 10:40:03.255974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.465  [2024-11-18T10:40:06.735Z] Copying: 228/1024 [MB] (228 MBps) [2024-11-18T10:40:07.674Z] Copying: 517/1024 [MB] (288 MBps) [2024-11-18T10:40:08.246Z] Copying: 822/1024 [MB] (305 MBps) [2024-11-18T10:40:10.158Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:11:44.274 00:11:44.274 10:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:44.274 10:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:44.274 10:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:44.274 10:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:44.274 10:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:44.274 10:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 { 00:11:44.274 "subsystems": [ 00:11:44.274 { 00:11:44.274 "subsystem": "bdev", 00:11:44.274 "config": [ 00:11:44.274 { 00:11:44.274 "params": { 00:11:44.274 "block_size": 512, 00:11:44.274 "num_blocks": 2097152, 00:11:44.274 "name": "malloc0" 00:11:44.274 }, 00:11:44.274 "method": "bdev_malloc_create" 00:11:44.274 }, 00:11:44.274 { 00:11:44.274 "params": { 00:11:44.274 "io_mechanism": "io_uring", 00:11:44.274 "filename": "/dev/nullb0", 00:11:44.274 "name": "null0" 00:11:44.274 }, 00:11:44.274 "method": "bdev_xnvme_create" 00:11:44.274 }, 00:11:44.274 { 00:11:44.274 "method": "bdev_wait_for_examine" 00:11:44.274 } 00:11:44.274 ] 00:11:44.274 } 00:11:44.274 ] 00:11:44.274 } 00:11:44.274 [2024-11-18 10:40:09.979398] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:44.274 [2024-11-18 10:40:09.979484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68679 ] 00:11:44.274 [2024-11-18 10:40:10.134431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.535 [2024-11-18 10:40:10.243090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.450  [2024-11-18T10:40:13.720Z] Copying: 261/1024 [MB] (261 MBps) [2024-11-18T10:40:14.663Z] Copying: 572/1024 [MB] (311 MBps) [2024-11-18T10:40:14.923Z] Copying: 883/1024 [MB] (311 MBps) [2024-11-18T10:40:16.905Z] Copying: 1024/1024 [MB] (average 296 MBps) 00:11:51.021 00:11:51.021 10:40:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:51.021 10:40:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:51.021 10:40:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:51.021 10:40:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:51.021 { 00:11:51.021 "subsystems": [ 00:11:51.021 { 00:11:51.021 "subsystem": "bdev", 00:11:51.021 "config": [ 00:11:51.021 { 00:11:51.021 "params": { 00:11:51.021 "block_size": 512, 00:11:51.021 "num_blocks": 2097152, 00:11:51.021 "name": "malloc0" 00:11:51.021 }, 00:11:51.021 "method": "bdev_malloc_create" 00:11:51.021 }, 00:11:51.021 { 00:11:51.021 "params": { 00:11:51.021 "io_mechanism": "io_uring", 00:11:51.021 "filename": "/dev/nullb0", 00:11:51.021 "name": "null0" 00:11:51.021 }, 00:11:51.021 "method": "bdev_xnvme_create" 00:11:51.021 }, 00:11:51.021 { 00:11:51.021 "method": "bdev_wait_for_examine" 00:11:51.021 } 00:11:51.021 ] 00:11:51.021 } 00:11:51.021 ] 00:11:51.021 } 00:11:51.022 [2024-11-18 10:40:16.685225] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
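Between the libaio passes earlier and the io_uring passes here, the only change in the bdev config is the io_mechanism value set for bdev_xnvme_create (xnvme.sh@39); the copy is then run in both directions (xnvme.sh@42 and @47). A hedged follow-on to the earlier sketch, reusing the hypothetical /tmp/xnvme_copy.json:

# Sketch only: switch the xnvme bdev to io_uring and repeat the copy both ways,
# mirroring the spdk_dd invocations traced at xnvme.sh@42 and xnvme.sh@47 above.
sed -i 's/"io_mechanism": "libaio"/"io_mechanism": "io_uring"/' /tmp/xnvme_copy.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json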
00:11:51.022 [2024-11-18 10:40:16.685348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68761 ] 00:11:51.022 [2024-11-18 10:40:16.842285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.282 [2024-11-18 10:40:16.924844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.195  [2024-11-18T10:40:20.022Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-18T10:40:20.964Z] Copying: 634/1024 [MB] (316 MBps) [2024-11-18T10:40:20.964Z] Copying: 950/1024 [MB] (316 MBps) [2024-11-18T10:40:22.879Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:11:56.995 00:11:56.995 10:40:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:56.995 10:40:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:56.995 00:11:56.995 real 0m27.364s 00:11:56.995 user 0m23.767s 00:11:56.995 sys 0m3.082s 00:11:56.995 10:40:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.995 10:40:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:56.995 ************************************ 00:11:56.995 END TEST xnvme_to_malloc_dd_copy 00:11:56.995 ************************************ 00:11:56.995 10:40:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:56.995 10:40:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.995 10:40:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.995 10:40:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.995 ************************************ 00:11:56.995 START TEST xnvme_bdevperf 00:11:56.995 ************************************ 00:11:56.995 10:40:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:56.995 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:56.995 10:40:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:56.995 10:40:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:57.256 
10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:57.256 10:40:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:57.256 { 00:11:57.256 "subsystems": [ 00:11:57.256 { 00:11:57.256 "subsystem": "bdev", 00:11:57.256 "config": [ 00:11:57.256 { 00:11:57.256 "params": { 00:11:57.256 "io_mechanism": "libaio", 00:11:57.256 "filename": "/dev/nullb0", 00:11:57.256 "name": "null0" 00:11:57.256 }, 00:11:57.256 "method": "bdev_xnvme_create" 00:11:57.256 }, 00:11:57.256 { 00:11:57.256 "method": "bdev_wait_for_examine" 00:11:57.256 } 00:11:57.256 ] 00:11:57.256 } 00:11:57.256 ] 00:11:57.256 } 00:11:57.256 [2024-11-18 10:40:22.949424] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:57.256 [2024-11-18 10:40:22.949534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68860 ] 00:11:57.256 [2024-11-18 10:40:23.106406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.517 [2024-11-18 10:40:23.181745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.517 Running I/O for 5 seconds... 00:11:59.844 201344.00 IOPS, 786.50 MiB/s [2024-11-18T10:40:26.670Z] 200480.00 IOPS, 783.12 MiB/s [2024-11-18T10:40:27.609Z] 200853.33 IOPS, 784.58 MiB/s [2024-11-18T10:40:28.551Z] 201120.00 IOPS, 785.62 MiB/s 00:12:02.667 Latency(us) 00:12:02.667 [2024-11-18T10:40:28.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.667 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:02.667 null0 : 5.00 201210.45 785.98 0.00 0.00 315.91 107.91 1569.08 00:12:02.667 [2024-11-18T10:40:28.551Z] =================================================================================================================== 00:12:02.667 [2024-11-18T10:40:28.551Z] Total : 201210.45 785.98 0.00 0.00 315.91 107.91 1569.08 00:12:03.284 10:40:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:03.284 10:40:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:03.284 10:40:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:03.284 10:40:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:03.284 10:40:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:03.284 10:40:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:03.284 { 00:12:03.284 "subsystems": [ 00:12:03.284 { 00:12:03.284 "subsystem": "bdev", 00:12:03.284 "config": [ 00:12:03.284 { 00:12:03.284 "params": { 00:12:03.284 "io_mechanism": "io_uring", 00:12:03.284 "filename": "/dev/nullb0", 00:12:03.284 "name": "null0" 00:12:03.284 }, 00:12:03.284 "method": "bdev_xnvme_create" 00:12:03.284 }, 00:12:03.284 { 00:12:03.284 "method": 
"bdev_wait_for_examine" 00:12:03.284 } 00:12:03.284 ] 00:12:03.284 } 00:12:03.284 ] 00:12:03.284 } 00:12:03.284 [2024-11-18 10:40:29.011630] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:03.284 [2024-11-18 10:40:29.011748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68932 ] 00:12:03.543 [2024-11-18 10:40:29.170049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.543 [2024-11-18 10:40:29.259451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.803 Running I/O for 5 seconds... 00:12:05.681 230016.00 IOPS, 898.50 MiB/s [2024-11-18T10:40:32.504Z] 230176.00 IOPS, 899.12 MiB/s [2024-11-18T10:40:33.890Z] 230250.67 IOPS, 899.42 MiB/s [2024-11-18T10:40:34.461Z] 230304.00 IOPS, 899.62 MiB/s 00:12:08.577 Latency(us) 00:12:08.577 [2024-11-18T10:40:34.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.577 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:08.577 null0 : 5.00 230314.21 899.66 0.00 0.00 275.65 146.51 1524.97 00:12:08.577 [2024-11-18T10:40:34.461Z] =================================================================================================================== 00:12:08.577 [2024-11-18T10:40:34.461Z] Total : 230314.21 899.66 0.00 0.00 275.65 146.51 1524.97 00:12:09.149 10:40:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:09.149 10:40:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:09.411 00:12:09.411 real 0m12.176s 00:12:09.411 user 0m9.829s 00:12:09.411 sys 0m2.107s 00:12:09.411 10:40:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.411 ************************************ 00:12:09.411 END TEST xnvme_bdevperf 00:12:09.411 10:40:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:09.411 ************************************ 00:12:09.411 ************************************ 00:12:09.411 END TEST nvme_xnvme 00:12:09.411 ************************************ 00:12:09.411 00:12:09.411 real 0m39.804s 00:12:09.411 user 0m33.696s 00:12:09.411 sys 0m5.321s 00:12:09.411 10:40:35 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.411 10:40:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:09.411 10:40:35 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:09.411 10:40:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.411 10:40:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.411 10:40:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.411 ************************************ 00:12:09.411 START TEST blockdev_xnvme 00:12:09.411 ************************************ 00:12:09.411 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:09.411 * Looking for test storage... 
00:12:09.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:09.411 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.411 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.411 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.411 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.411 10:40:35 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.411 10:40:35 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.411 10:40:35 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.411 10:40:35 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.411 10:40:35 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.412 10:40:35 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.412 --rc genhtml_branch_coverage=1 00:12:09.412 --rc genhtml_function_coverage=1 00:12:09.412 --rc genhtml_legend=1 00:12:09.412 --rc geninfo_all_blocks=1 00:12:09.412 --rc geninfo_unexecuted_blocks=1 00:12:09.412 00:12:09.412 ' 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.412 --rc genhtml_branch_coverage=1 00:12:09.412 --rc genhtml_function_coverage=1 00:12:09.412 --rc genhtml_legend=1 
00:12:09.412 --rc geninfo_all_blocks=1 00:12:09.412 --rc geninfo_unexecuted_blocks=1 00:12:09.412 00:12:09.412 ' 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.412 --rc genhtml_branch_coverage=1 00:12:09.412 --rc genhtml_function_coverage=1 00:12:09.412 --rc genhtml_legend=1 00:12:09.412 --rc geninfo_all_blocks=1 00:12:09.412 --rc geninfo_unexecuted_blocks=1 00:12:09.412 00:12:09.412 ' 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.412 --rc genhtml_branch_coverage=1 00:12:09.412 --rc genhtml_function_coverage=1 00:12:09.412 --rc genhtml_legend=1 00:12:09.412 --rc geninfo_all_blocks=1 00:12:09.412 --rc geninfo_unexecuted_blocks=1 00:12:09.412 00:12:09.412 ' 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69075 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69075 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69075 ']' 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.412 10:40:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:09.412 10:40:35 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:09.674 [2024-11-18 10:40:35.377355] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:09.674 [2024-11-18 10:40:35.377504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69075 ] 00:12:09.674 [2024-11-18 10:40:35.536667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.935 [2024-11-18 10:40:35.624479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.507 10:40:36 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.507 10:40:36 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:10.507 10:40:36 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:10.507 10:40:36 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:10.507 10:40:36 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:10.507 10:40:36 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:10.507 10:40:36 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:10.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:10.838 Waiting for block devices as requested 00:12:10.838 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.838 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.100 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.100 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.389 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned 
nvme1n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:16.389 10:40:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:16.389 10:40:41 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:16.389 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:16.390 nvme0n1 00:12:16.390 nvme1n1 00:12:16.390 nvme2n1 00:12:16.390 nvme2n2 00:12:16.390 nvme2n3 00:12:16.390 nvme3n1 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.390 10:40:41 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.390 10:40:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.390 10:40:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3ef76ec5-02c0-4a14-b3c7-0f20ea37df69"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3ef76ec5-02c0-4a14-b3c7-0f20ea37df69",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fa0addfa-649c-4696-8481-73c24c40aed6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa0addfa-649c-4696-8481-73c24c40aed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6336bb82-0c89-4b98-95ce-b1978d3b8901"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6336bb82-0c89-4b98-95ce-b1978d3b8901",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "3e105b52-c726-4c3a-a630-7a6ad6d3e467"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e105b52-c726-4c3a-a630-7a6ad6d3e467",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8d1ee117-2e59-417b-90f4-0c614a48d0b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d1ee117-2e59-417b-90f4-0c614a48d0b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c1c75454-9343-49a2-821c-58f9878016fa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c1c75454-9343-49a2-821c-58f9878016fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:16.390 10:40:42 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69075 00:12:16.390 10:40:42 
blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69075 ']' 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69075 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69075 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.390 killing process with pid 69075 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69075' 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69075 00:12:16.390 10:40:42 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69075 00:12:17.779 10:40:43 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:17.779 10:40:43 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:17.779 10:40:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:17.779 10:40:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.779 10:40:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:17.779 ************************************ 00:12:17.779 START TEST bdev_hello_world 00:12:17.779 ************************************ 00:12:17.779 10:40:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:17.779 [2024-11-18 10:40:43.349498] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:17.779 [2024-11-18 10:40:43.349608] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69436 ] 00:12:17.779 [2024-11-18 10:40:43.506372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.779 [2024-11-18 10:40:43.589343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.040 [2024-11-18 10:40:43.872068] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:18.040 [2024-11-18 10:40:43.872105] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:18.040 [2024-11-18 10:40:43.872116] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:18.040 [2024-11-18 10:40:43.873577] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:18.040 [2024-11-18 10:40:43.873886] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:18.040 [2024-11-18 10:40:43.873902] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:18.040 [2024-11-18 10:40:43.874101] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:18.040 00:12:18.040 [2024-11-18 10:40:43.874118] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:18.612 00:12:18.612 real 0m1.126s 00:12:18.612 user 0m0.864s 00:12:18.612 sys 0m0.150s 00:12:18.612 10:40:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.612 10:40:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:18.612 ************************************ 00:12:18.612 END TEST bdev_hello_world 00:12:18.612 ************************************ 00:12:18.612 10:40:44 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:18.612 10:40:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.612 10:40:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.612 10:40:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.612 ************************************ 00:12:18.612 START TEST bdev_bounds 00:12:18.612 ************************************ 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69467 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:18.612 Process bdevio pid: 69467 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69467' 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69467 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69467 ']' 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:18.612 10:40:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:18.872 [2024-11-18 10:40:44.529272] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
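The bdev_bounds test that starts here drives the bdevio app against the same bdev.json and then triggers its suites over RPC with tests.py (both paths appear verbatim in the trace). A rough manual equivalent as a sketch; the backgrounding and fixed sleep are assumptions, since the harness actually uses waitforlisten on the RPC socket.

# Sketch only: start bdevio on the xnvme bdevs, then run its test suites via RPC.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
sleep 2   # assumption; replace with a proper wait for the RPC socket
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests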
00:12:18.872 [2024-11-18 10:40:44.529393] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69467 ] 00:12:18.872 [2024-11-18 10:40:44.697234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.132 [2024-11-18 10:40:44.783815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.132 [2024-11-18 10:40:44.783979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.132 [2024-11-18 10:40:44.784000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.705 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.705 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:19.705 10:40:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:19.705 I/O targets: 00:12:19.705 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:19.705 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:19.705 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:19.705 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:19.705 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:19.705 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:19.705 00:12:19.705 00:12:19.705 CUnit - A unit testing framework for C - Version 2.1-3 00:12:19.705 http://cunit.sourceforge.net/ 00:12:19.705 00:12:19.705 00:12:19.705 Suite: bdevio tests on: nvme3n1 00:12:19.705 Test: blockdev write read block ...passed 00:12:19.705 Test: blockdev write zeroes read block ...passed 00:12:19.705 Test: blockdev write zeroes read no split ...passed 00:12:19.705 Test: blockdev write zeroes read split ...passed 00:12:19.705 Test: blockdev write zeroes read split partial ...passed 00:12:19.705 Test: blockdev reset ...passed 00:12:19.705 Test: blockdev write read 8 blocks ...passed 00:12:19.705 Test: blockdev write read size > 128k ...passed 00:12:19.705 Test: blockdev write read invalid size ...passed 00:12:19.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.705 Test: blockdev write read max offset ...passed 00:12:19.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.705 Test: blockdev writev readv 8 blocks ...passed 00:12:19.705 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.705 Test: blockdev writev readv block ...passed 00:12:19.705 Test: blockdev writev readv size > 128k ...passed 00:12:19.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.705 Test: blockdev comparev and writev ...passed 00:12:19.705 Test: blockdev nvme passthru rw ...passed 00:12:19.705 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.705 Test: blockdev nvme admin passthru ...passed 00:12:19.705 Test: blockdev copy ...passed 00:12:19.705 Suite: bdevio tests on: nvme2n3 00:12:19.705 Test: blockdev write read block ...passed 00:12:19.705 Test: blockdev write zeroes read block ...passed 00:12:19.705 Test: blockdev write zeroes read no split ...passed 00:12:19.705 Test: blockdev write zeroes read split ...passed 00:12:19.966 Test: blockdev write zeroes read split partial ...passed 00:12:19.966 Test: blockdev reset ...passed 
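[Annotation] A quick cross-check of the MiB figures in the I/O targets list: each size is blocks x 4096 bytes, and the 6050 MiB shown for nvme1n1 (1548666 blocks = 6049.48 MiB) suggests bdevio rounds up to the next whole MiB. The loop below reproduces all four distinct figures under that assumption:

    for spec in 'nvme0n1 1310720' 'nvme1n1 1548666' 'nvme2n1 1048576' 'nvme3n1 262144'; do
        set -- $spec   # split "name blocks" into $1 and $2
        echo "$1: $(( ($2 * 4096 + 1024 * 1024 - 1) / (1024 * 1024) )) MiB"   # ceil to MiB
    done

This prints 5120, 6050, 4096 and 1024 MiB, matching the listing (nvme2n2 and nvme2n3 have the same shape as nvme2n1).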
00:12:19.966 Test: blockdev write read 8 blocks ...passed 00:12:19.966 Test: blockdev write read size > 128k ...passed 00:12:19.966 Test: blockdev write read invalid size ...passed 00:12:19.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.966 Test: blockdev write read max offset ...passed 00:12:19.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.966 Test: blockdev writev readv 8 blocks ...passed 00:12:19.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.966 Test: blockdev writev readv block ...passed 00:12:19.966 Test: blockdev writev readv size > 128k ...passed 00:12:19.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.966 Test: blockdev comparev and writev ...passed 00:12:19.966 Test: blockdev nvme passthru rw ...passed 00:12:19.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.966 Test: blockdev nvme admin passthru ...passed 00:12:19.966 Test: blockdev copy ...passed 00:12:19.966 Suite: bdevio tests on: nvme2n2 00:12:19.966 Test: blockdev write read block ...passed 00:12:19.966 Test: blockdev write zeroes read block ...passed 00:12:19.966 Test: blockdev write zeroes read no split ...passed 00:12:19.966 Test: blockdev write zeroes read split ...passed 00:12:19.966 Test: blockdev write zeroes read split partial ...passed 00:12:19.966 Test: blockdev reset ...passed 00:12:19.966 Test: blockdev write read 8 blocks ...passed 00:12:19.966 Test: blockdev write read size > 128k ...passed 00:12:19.966 Test: blockdev write read invalid size ...passed 00:12:19.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.966 Test: blockdev write read max offset ...passed 00:12:19.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.966 Test: blockdev writev readv 8 blocks ...passed 00:12:19.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.966 Test: blockdev writev readv block ...passed 00:12:19.966 Test: blockdev writev readv size > 128k ...passed 00:12:19.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.966 Test: blockdev comparev and writev ...passed 00:12:19.966 Test: blockdev nvme passthru rw ...passed 00:12:19.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.966 Test: blockdev nvme admin passthru ...passed 00:12:19.966 Test: blockdev copy ...passed 00:12:19.966 Suite: bdevio tests on: nvme2n1 00:12:19.966 Test: blockdev write read block ...passed 00:12:19.966 Test: blockdev write zeroes read block ...passed 00:12:19.966 Test: blockdev write zeroes read no split ...passed 00:12:19.966 Test: blockdev write zeroes read split ...passed 00:12:19.966 Test: blockdev write zeroes read split partial ...passed 00:12:19.966 Test: blockdev reset ...passed 00:12:19.966 Test: blockdev write read 8 blocks ...passed 00:12:19.966 Test: blockdev write read size > 128k ...passed 00:12:19.966 Test: blockdev write read invalid size ...passed 00:12:19.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.966 Test: blockdev write read max offset ...passed 00:12:19.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.966 Test: blockdev writev readv 8 blocks 
...passed 00:12:19.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.966 Test: blockdev writev readv block ...passed 00:12:19.966 Test: blockdev writev readv size > 128k ...passed 00:12:19.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.966 Test: blockdev comparev and writev ...passed 00:12:19.966 Test: blockdev nvme passthru rw ...passed 00:12:19.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.966 Test: blockdev nvme admin passthru ...passed 00:12:19.966 Test: blockdev copy ...passed 00:12:19.966 Suite: bdevio tests on: nvme1n1 00:12:19.966 Test: blockdev write read block ...passed 00:12:19.966 Test: blockdev write zeroes read block ...passed 00:12:19.966 Test: blockdev write zeroes read no split ...passed 00:12:19.966 Test: blockdev write zeroes read split ...passed 00:12:19.966 Test: blockdev write zeroes read split partial ...passed 00:12:19.966 Test: blockdev reset ...passed 00:12:19.966 Test: blockdev write read 8 blocks ...passed 00:12:19.966 Test: blockdev write read size > 128k ...passed 00:12:19.966 Test: blockdev write read invalid size ...passed 00:12:19.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.966 Test: blockdev write read max offset ...passed 00:12:19.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.966 Test: blockdev writev readv 8 blocks ...passed 00:12:19.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.226 Test: blockdev writev readv block ...passed 00:12:20.226 Test: blockdev writev readv size > 128k ...passed 00:12:20.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.226 Test: blockdev comparev and writev ...passed 00:12:20.226 Test: blockdev nvme passthru rw ...passed 00:12:20.226 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.226 Test: blockdev nvme admin passthru ...passed 00:12:20.226 Test: blockdev copy ...passed 00:12:20.226 Suite: bdevio tests on: nvme0n1 00:12:20.226 Test: blockdev write read block ...passed 00:12:20.226 Test: blockdev write zeroes read block ...passed 00:12:20.226 Test: blockdev write zeroes read no split ...passed 00:12:20.226 Test: blockdev write zeroes read split ...passed 00:12:20.226 Test: blockdev write zeroes read split partial ...passed 00:12:20.226 Test: blockdev reset ...passed 00:12:20.226 Test: blockdev write read 8 blocks ...passed 00:12:20.226 Test: blockdev write read size > 128k ...passed 00:12:20.226 Test: blockdev write read invalid size ...passed 00:12:20.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.226 Test: blockdev write read max offset ...passed 00:12:20.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.226 Test: blockdev writev readv 8 blocks ...passed 00:12:20.226 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.226 Test: blockdev writev readv block ...passed 00:12:20.226 Test: blockdev writev readv size > 128k ...passed 00:12:20.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.226 Test: blockdev comparev and writev ...passed 00:12:20.226 Test: blockdev nvme passthru rw ...passed 00:12:20.226 Test: blockdev nvme passthru vendor specific ...passed 00:12:20.226 Test: blockdev nvme admin passthru ...passed 00:12:20.226 Test: blockdev copy ...passed 
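[Annotation] Every suite above runs the same 23 checks, which is where the 6 x 23 = 138 tests in the Run Summary that follows come from. If you ever need to gate on such a summary from outside the harness (the harness itself keys off exit statuses, not log scraping), the Failed count is the second-to-last field on each counts row; a hypothetical capture named bdevio.log could be checked with:

    awk '/(suites|tests|asserts) +[0-9]/ { if ($(NF - 1) != 0) bad = 1 } END { exit bad }' bdevio.log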
00:12:20.226 00:12:20.226 Run Summary: Type Total Ran Passed Failed Inactive 00:12:20.226 suites 6 6 n/a 0 0 00:12:20.227 tests 138 138 138 0 0 00:12:20.227 asserts 780 780 780 0 n/a 00:12:20.227 00:12:20.227 Elapsed time = 1.115 seconds 00:12:20.227 0 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69467 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69467 ']' 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69467 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69467 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.227 killing process with pid 69467 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69467' 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69467 00:12:20.227 10:40:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69467 00:12:20.798 10:40:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:20.798 00:12:20.798 real 0m2.049s 00:12:20.798 user 0m5.160s 00:12:20.798 sys 0m0.280s 00:12:20.798 10:40:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.798 ************************************ 00:12:20.798 END TEST bdev_bounds 00:12:20.798 ************************************ 00:12:20.798 10:40:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:20.798 10:40:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:20.798 10:40:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.798 10:40:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.798 10:40:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.798 ************************************ 00:12:20.798 START TEST bdev_nbd 00:12:20.798 ************************************ 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:20.798 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
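[Annotation] The bdev_nbd stage being set up here exercises the same six bdevs through the kernel's NBD layer. The surrounding trace (bdev/blockdev.sh@299-@322) reduces to roughly this skeleton, with $rootdir and $conf standing in for the long paths and the 16-entry nbd_all array elided:

    nbd_function_test() {
        local rpc_server=/var/tmp/spdk-nbd.sock                        # @301
        local bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
        local nbd_list=(/dev/nbd{0,1,10,11,12,13})                     # @313: first 6 of nbd_all
        [[ $(uname -s) == Linux && -e /sys/module/nbd ]] || return 0   # @299/@308: NBD is Linux-only
        $rootdir/test/app/bdev_svc/bdev_svc -r $rpc_server -i 0 --json "$conf" &   # @316
        nbd_pid=$!                                                     # @317 (69522 in this run)
        trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT       # @318
        waitforlisten $nbd_pid $rpc_server                             # @319
        nbd_rpc_start_stop_verify $rpc_server "${bdev_list[*]}"        # @321
        nbd_rpc_data_verify $rpc_server "${bdev_list[*]}" "${nbd_list[*]}"   # @322
        killprocess $nbd_pid
        trap - SIGINT SIGTERM EXIT
    }

Note the dedicated RPC socket (/var/tmp/spdk-nbd.sock): bdev_svc is a bare test app that just loads the bdevs from bdev.json and serves RPCs.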
00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69522 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69522 /var/tmp/spdk-nbd.sock 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69522 ']' 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.799 10:40:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:20.799 [2024-11-18 10:40:46.646300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
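[Annotation] nbd_rpc_start_stop_verify, which the trace now enters, first attaches every bdev without naming an NBD device, so the target picks a free one; the RPC's stdout is the device it chose. Per the nbd_common.sh@22-@30 trace this is essentially:

    for bdev_name in "${bdev_list[@]}"; do
        nbd_device=$($rootdir/scripts/rpc.py -s "$rpc_server" nbd_start_disk "$bdev_name")   # @28
        waitfornbd "$(basename "$nbd_device")"    # @30: e.g. /dev/nbd0 -> nbd0
    done

Contrast this with the data-verify pass later in the log, where nbd_start_disk is called with an explicit second argument (nvme0n1 /dev/nbd0) to pin each bdev to a known device.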
00:12:20.799 [2024-11-18 10:40:46.646444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.060 [2024-11-18 10:40:46.807662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.060 [2024-11-18 10:40:46.893527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:21.633 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.895 
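[Annotation] waitfornbd, whose expansion fills much of this stage, is a readiness probe: poll /proc/partitions until the device appears, then prove it is actually readable by pulling one 4 KiB block off it with O_DIRECT. Sketched from the autotest_common.sh@872-@893 trace ($testdir is a stand-in, and the trace's second retry loop around dd is folded away here):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                        # @875: bounded retry loop
            grep -q -w "$nbd_name" /proc/partitions && break   # @876-@877
            sleep 0.1   # assumed back-off; the retry branch is never hit in this run
        done
        dd if=/dev/$nbd_name of=$testdir/nbdtest bs=4096 count=1 iflag=direct   # @889
        local size
        size=$(stat -c %s $testdir/nbdtest)                    # @890
        rm -f $testdir/nbdtest                                 # @891
        [[ $size != 0 ]]                                       # @892: dd must have read data
    }

The dd throughput figures scattered below (7.1 MB/s, 3.8 MB/s, ...) are these single-block probes; they measure per-IO setup latency, not device bandwidth.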
1+0 records in 00:12:21.895 1+0 records out 00:12:21.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579036 s, 7.1 MB/s 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:21.895 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.156 1+0 records in 00:12:22.156 1+0 records out 00:12:22.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108464 s, 3.8 MB/s 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.156 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:22.157 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.157 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.157 10:40:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:22.157 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.157 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.157 10:40:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:22.418 10:40:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.418 1+0 records in 00:12:22.418 1+0 records out 00:12:22.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000970097 s, 4.2 MB/s 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.418 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.680 1+0 records in 00:12:22.680 1+0 records out 00:12:22.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000993604 s, 4.1 MB/s 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.680 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.942 1+0 records in 00:12:22.942 1+0 records out 00:12:22.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114619 s, 3.6 MB/s 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.942 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:23.205 10:40:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.205 1+0 records in 00:12:23.205 1+0 records out 00:12:23.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107783 s, 3.8 MB/s 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:23.205 10:40:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.466 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:23.466 { 00:12:23.466 "nbd_device": "/dev/nbd0", 00:12:23.466 "bdev_name": "nvme0n1" 00:12:23.466 }, 00:12:23.466 { 00:12:23.466 "nbd_device": "/dev/nbd1", 00:12:23.466 "bdev_name": "nvme1n1" 00:12:23.466 }, 00:12:23.466 { 00:12:23.466 "nbd_device": "/dev/nbd2", 00:12:23.466 "bdev_name": "nvme2n1" 00:12:23.466 }, 00:12:23.466 { 00:12:23.466 "nbd_device": "/dev/nbd3", 00:12:23.466 "bdev_name": "nvme2n2" 00:12:23.466 }, 00:12:23.466 { 00:12:23.466 "nbd_device": "/dev/nbd4", 00:12:23.466 "bdev_name": "nvme2n3" 00:12:23.466 }, 00:12:23.466 { 00:12:23.466 "nbd_device": "/dev/nbd5", 00:12:23.466 "bdev_name": "nvme3n1" 00:12:23.466 } 00:12:23.466 ]' 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:23.467 { 00:12:23.467 "nbd_device": "/dev/nbd0", 00:12:23.467 "bdev_name": "nvme0n1" 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "nbd_device": "/dev/nbd1", 00:12:23.467 "bdev_name": "nvme1n1" 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "nbd_device": "/dev/nbd2", 00:12:23.467 "bdev_name": "nvme2n1" 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "nbd_device": "/dev/nbd3", 00:12:23.467 "bdev_name": "nvme2n2" 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "nbd_device": "/dev/nbd4", 00:12:23.467 "bdev_name": "nvme2n3" 00:12:23.467 }, 00:12:23.467 { 00:12:23.467 "nbd_device": "/dev/nbd5", 00:12:23.467 "bdev_name": "nvme3n1" 00:12:23.467 } 00:12:23.467 ]' 00:12:23.467 10:40:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.467 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.729 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.991 10:40:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:24.252 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:24.252 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:24.252 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.253 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.515 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.776 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:25.037 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.038 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.038 10:40:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:25.299 /dev/nbd0 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.299 1+0 records in 00:12:25.299 1+0 records out 00:12:25.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340516 s, 12.0 MB/s 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.299 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:25.559 /dev/nbd1 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.559 1+0 records in 00:12:25.559 1+0 records out 00:12:25.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064997 s, 6.3 MB/s 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:25.559 10:40:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.559 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:25.820 /dev/nbd10 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.820 1+0 records in 00:12:25.820 1+0 records out 00:12:25.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349536 s, 11.7 MB/s 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.820 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:26.081 /dev/nbd11 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:26.081 10:40:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.081 1+0 records in 00:12:26.081 1+0 records out 00:12:26.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047353 s, 8.6 MB/s 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:26.081 /dev/nbd12 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.081 1+0 records in 00:12:26.081 1+0 records out 00:12:26.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509559 s, 8.0 MB/s 00:12:26.081 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:26.342 10:40:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:26.342 /dev/nbd13 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.342 1+0 records in 00:12:26.342 1+0 records out 00:12:26.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492432 s, 8.3 MB/s 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.342 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd0", 00:12:26.603 "bdev_name": "nvme0n1" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd1", 00:12:26.603 "bdev_name": "nvme1n1" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd10", 00:12:26.603 "bdev_name": "nvme2n1" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd11", 00:12:26.603 "bdev_name": "nvme2n2" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd12", 00:12:26.603 "bdev_name": "nvme2n3" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd13", 00:12:26.603 "bdev_name": "nvme3n1" 00:12:26.603 } 00:12:26.603 ]' 00:12:26.603 10:40:52 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd0", 00:12:26.603 "bdev_name": "nvme0n1" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd1", 00:12:26.603 "bdev_name": "nvme1n1" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd10", 00:12:26.603 "bdev_name": "nvme2n1" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd11", 00:12:26.603 "bdev_name": "nvme2n2" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd12", 00:12:26.603 "bdev_name": "nvme2n3" 00:12:26.603 }, 00:12:26.603 { 00:12:26.603 "nbd_device": "/dev/nbd13", 00:12:26.603 "bdev_name": "nvme3n1" 00:12:26.603 } 00:12:26.603 ]' 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:26.603 /dev/nbd1 00:12:26.603 /dev/nbd10 00:12:26.603 /dev/nbd11 00:12:26.603 /dev/nbd12 00:12:26.603 /dev/nbd13' 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:26.603 /dev/nbd1 00:12:26.603 /dev/nbd10 00:12:26.603 /dev/nbd11 00:12:26.603 /dev/nbd12 00:12:26.603 /dev/nbd13' 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:26.603 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:26.604 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:26.604 256+0 records in 00:12:26.604 256+0 records out 00:12:26.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00798497 s, 131 MB/s 00:12:26.604 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.604 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:26.865 256+0 records in 00:12:26.865 256+0 records out 00:12:26.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0508201 s, 20.6 MB/s 00:12:26.865 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.865 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:26.865 256+0 records in 00:12:26.865 256+0 records out 00:12:26.865 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0576029 s, 18.2 MB/s 00:12:26.865 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.865 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:26.865 256+0 records in 00:12:26.865 256+0 records out 00:12:26.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143232 s, 7.3 MB/s 00:12:26.865 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.865 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:27.126 256+0 records in 00:12:27.126 256+0 records out 00:12:27.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228976 s, 4.6 MB/s 00:12:27.126 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.126 10:40:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:27.387 256+0 records in 00:12:27.388 256+0 records out 00:12:27.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237136 s, 4.4 MB/s 00:12:27.388 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.388 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:27.649 256+0 records in 00:12:27.649 256+0 records out 00:12:27.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23731 s, 4.4 MB/s 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.649 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.910 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.171 10:40:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:28.431 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:28.431 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:28.431 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:28.431 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.432 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.432 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:28.432 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:28.432 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.432 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.432 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.693 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.953 10:40:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.953 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.954 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.954 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.954 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:28.954 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:28.954 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:29.214 10:40:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:29.214 malloc_lvol_verify 00:12:29.214 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:29.476 45e1ae72-2b53-4ade-887f-69d8ff92d196 00:12:29.476 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:29.737 70758d10-0503-409e-8542-9ac46c6c30f6 00:12:29.737 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:29.998 /dev/nbd0 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:12:29.998 mke2fs 1.47.0 (5-Feb-2023) 00:12:29.998 Discarding device blocks: 0/4096 done 00:12:29.998 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:29.998 00:12:29.998 Allocating group tables: 0/1 done 00:12:29.998 Writing inode tables: 0/1 done 00:12:29.998 Creating journal (1024 blocks): done 00:12:29.998 Writing superblocks and filesystem accounting information: 0/1 done 00:12:29.998 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.998 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69522 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69522 ']' 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69522 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69522 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.260 killing process with pid 69522 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69522' 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69522 00:12:30.260 10:40:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69522 00:12:30.833 10:40:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:30.833 00:12:30.833 real 0m9.942s 00:12:30.833 user 0m13.763s 00:12:30.833 sys 0m3.380s 00:12:30.833 10:40:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.833 ************************************ 00:12:30.833 END TEST bdev_nbd 00:12:30.833 ************************************ 00:12:30.833 
10:40:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:30.833 10:40:56 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:30.833 10:40:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:30.833 10:40:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:30.833 10:40:56 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:30.833 10:40:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.833 10:40:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.833 10:40:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.833 ************************************ 00:12:30.833 START TEST bdev_fio 00:12:30.833 ************************************ 00:12:30.833 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:30.833 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:30.834 ************************************ 00:12:30.834 START TEST bdev_fio_rw_verify 00:12:30.834 ************************************ 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:30.834 10:40:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.096 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.096 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.096 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.096 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.096 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.096 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.096 fio-3.35 00:12:31.096 Starting 6 threads 00:12:43.333 00:12:43.333 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69925: Mon Nov 18 10:41:07 2024 00:12:43.333 read: IOPS=20.0k, BW=78.1MiB/s (81.9MB/s)(782MiB/10003msec) 00:12:43.333 slat (usec): min=2, max=3221, avg= 5.52, stdev=16.08 00:12:43.333 clat (usec): min=80, max=9690, avg=963.90, 
stdev=907.56 00:12:43.333 lat (usec): min=83, max=9703, avg=969.42, stdev=908.56 00:12:43.333 clat percentiles (usec): 00:12:43.333 | 50.000th=[ 562], 99.000th=[ 3982], 99.900th=[ 5538], 99.990th=[ 7439], 00:12:43.333 | 99.999th=[ 9634] 00:12:43.333 write: IOPS=20.4k, BW=79.7MiB/s (83.6MB/s)(797MiB/10003msec); 0 zone resets 00:12:43.333 slat (usec): min=10, max=6571, avg=33.96, stdev=134.83 00:12:43.333 clat (usec): min=76, max=9345, avg=1129.86, stdev=1014.57 00:12:43.333 lat (usec): min=90, max=9375, avg=1163.82, stdev=1033.17 00:12:43.333 clat percentiles (usec): 00:12:43.333 | 50.000th=[ 685], 99.000th=[ 4490], 99.900th=[ 6194], 99.990th=[ 7898], 00:12:43.333 | 99.999th=[ 9372] 00:12:43.333 bw ( KiB/s): min=46142, max=190039, per=100.00%, avg=81816.89, stdev=6988.02, samples=114 00:12:43.333 iops : min=11533, max=47509, avg=20452.84, stdev=1747.09, samples=114 00:12:43.333 lat (usec) : 100=0.09%, 250=11.90%, 500=29.19%, 750=14.46%, 1000=7.15% 00:12:43.333 lat (msec) : 2=21.12%, 4=14.62%, 10=1.46% 00:12:43.333 cpu : usr=44.17%, sys=32.77%, ctx=6343, majf=0, minf=18572 00:12:43.333 IO depths : 1=11.8%, 2=24.3%, 4=50.8%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.333 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.333 issued rwts: total=200076,204071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.333 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:43.333 00:12:43.333 Run status group 0 (all jobs): 00:12:43.333 READ: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=782MiB (820MB), run=10003-10003msec 00:12:43.333 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=797MiB (836MB), run=10003-10003msec 00:12:43.333 ----------------------------------------------------- 00:12:43.333 Suppressions used: 00:12:43.333 count bytes template 00:12:43.333 6 48 /usr/src/fio/parse.c 00:12:43.333 3849 369504 /usr/src/fio/iolog.c 00:12:43.333 1 8 libtcmalloc_minimal.so 00:12:43.333 1 904 libcrypto.so 00:12:43.333 ----------------------------------------------------- 00:12:43.333 00:12:43.333 00:12:43.333 real 0m12.024s 00:12:43.333 user 0m28.008s 00:12:43.333 sys 0m20.025s 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.333 ************************************ 00:12:43.333 END TEST bdev_fio_rw_verify 00:12:43.333 ************************************ 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:12:43.333 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3ef76ec5-02c0-4a14-b3c7-0f20ea37df69"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3ef76ec5-02c0-4a14-b3c7-0f20ea37df69",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fa0addfa-649c-4696-8481-73c24c40aed6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa0addfa-649c-4696-8481-73c24c40aed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6336bb82-0c89-4b98-95ce-b1978d3b8901"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6336bb82-0c89-4b98-95ce-b1978d3b8901",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "3e105b52-c726-4c3a-a630-7a6ad6d3e467"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e105b52-c726-4c3a-a630-7a6ad6d3e467",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8d1ee117-2e59-417b-90f4-0c614a48d0b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d1ee117-2e59-417b-90f4-0c614a48d0b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c1c75454-9343-49a2-821c-58f9878016fa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c1c75454-9343-49a2-821c-58f9878016fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:43.334 /home/vagrant/spdk_repo/spdk 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
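For readers following the bdev_fio stage traced above, the commands reduce to the short sketch below. Only the per-bdev job stanzas and the command-line flags are taken from the trace; the [global] section that fio_config_gen writes into bdev.fio is not visible in the log, so that part is omitted and the sketch is illustrative rather than a verbatim copy of the test script.

    # Append one job stanza per xNVMe bdev, as echoed in the trace above.
    fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$fio_cfg"
    done
    # Run fio through the SPDK bdev ioengine plugin; ASan and the plugin are
    # preloaded exactly as in the LD_PRELOAD assignment traced above.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$fio_cfg" --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output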
00:12:43.334 00:12:43.334 real 0m12.193s 00:12:43.334 user 0m28.081s 00:12:43.334 sys 0m20.098s 00:12:43.334 ************************************ 00:12:43.334 END TEST bdev_fio 00:12:43.334 ************************************ 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.334 10:41:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:43.334 10:41:08 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:43.334 10:41:08 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:43.334 10:41:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:43.334 10:41:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.334 10:41:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.334 ************************************ 00:12:43.334 START TEST bdev_verify 00:12:43.334 ************************************ 00:12:43.334 10:41:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:43.334 [2024-11-18 10:41:08.903736] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:43.334 [2024-11-18 10:41:08.903874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70098 ] 00:12:43.334 [2024-11-18 10:41:09.068012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:43.334 [2024-11-18 10:41:09.189917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.334 [2024-11-18 10:41:09.190007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.904 Running I/O for 5 seconds... 
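The five-second run starting here is driven by the bdevperf example application with the arguments traced just above; the sketch below restates that invocation and shows how the MiB/s column in the result table that follows can be cross-checked from the IOPS column (plain arithmetic, assuming only the 4096-byte IO size passed with -o).

    # Same invocation as traced above: queue depth 128, 4 KiB IOs, verify
    # workload for 5 seconds on cores 0-1 (-m 0x3).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # Cross-check for the summary row below:
    #   MiB/s = IOPS * 4096 / 1048576, e.g. 23006.23 * 4096 / 1048576 = 89.87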
00:12:46.267 21792.00 IOPS, 85.12 MiB/s [2024-11-18T10:41:13.116Z] 23104.00 IOPS, 90.25 MiB/s [2024-11-18T10:41:14.053Z] 23946.67 IOPS, 93.54 MiB/s [2024-11-18T10:41:14.989Z] 23448.00 IOPS, 91.59 MiB/s [2024-11-18T10:41:14.989Z] 23225.60 IOPS, 90.72 MiB/s 00:12:49.105 Latency(us) 00:12:49.105 [2024-11-18T10:41:14.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.105 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x0 length 0xa0000 00:12:49.105 nvme0n1 : 5.05 1774.52 6.93 0.00 0.00 72007.65 8368.44 75416.81 00:12:49.105 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0xa0000 length 0xa0000 00:12:49.105 nvme0n1 : 5.07 1841.54 7.19 0.00 0.00 68838.68 9326.28 76223.41 00:12:49.105 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x0 length 0xbd0bd 00:12:49.105 nvme1n1 : 5.06 2308.95 9.02 0.00 0.00 55171.06 4763.96 59284.87 00:12:49.105 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:49.105 nvme1n1 : 5.08 2412.05 9.42 0.00 0.00 52431.95 6452.78 61301.37 00:12:49.105 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x0 length 0x80000 00:12:49.105 nvme2n1 : 5.06 1871.93 7.31 0.00 0.00 68036.04 9225.45 64931.05 00:12:49.105 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x80000 length 0x80000 00:12:49.105 nvme2n1 : 5.08 1888.82 7.38 0.00 0.00 66968.30 5217.67 67754.14 00:12:49.105 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x0 length 0x80000 00:12:49.105 nvme2n2 : 5.05 1798.20 7.02 0.00 0.00 70702.22 11947.72 67350.84 00:12:49.105 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x80000 length 0x80000 00:12:49.105 nvme2n2 : 5.05 1876.33 7.33 0.00 0.00 68080.34 5167.26 75013.51 00:12:49.105 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x0 length 0x80000 00:12:49.105 nvme2n3 : 5.06 1772.40 6.92 0.00 0.00 71610.72 12603.08 66544.25 00:12:49.105 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x80000 length 0x80000 00:12:49.105 nvme2n3 : 5.06 1845.15 7.21 0.00 0.00 69093.19 7763.50 68157.44 00:12:49.105 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x0 length 0x20000 00:12:49.105 nvme3n1 : 5.06 1771.73 6.92 0.00 0.00 71498.84 4965.61 75820.11 00:12:49.105 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.105 Verification LBA range: start 0x20000 length 0x20000 00:12:49.105 nvme3n1 : 5.07 1844.61 7.21 0.00 0.00 68888.70 10586.58 69770.63 00:12:49.105 [2024-11-18T10:41:14.989Z] =================================================================================================================== 00:12:49.105 [2024-11-18T10:41:14.989Z] Total : 23006.23 89.87 0.00 0.00 66292.22 4763.96 76223.41 00:12:49.674 00:12:49.674 real 0m6.647s 00:12:49.674 user 0m10.786s 00:12:49.674 sys 0m1.403s 00:12:49.674 10:41:15 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.674 ************************************ 00:12:49.674 END TEST bdev_verify 00:12:49.674 ************************************ 00:12:49.674 10:41:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:49.674 10:41:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.674 10:41:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:49.674 10:41:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.674 10:41:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.674 ************************************ 00:12:49.674 START TEST bdev_verify_big_io 00:12:49.674 ************************************ 00:12:49.674 10:41:15 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.933 [2024-11-18 10:41:15.610426] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:49.933 [2024-11-18 10:41:15.610537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70195 ] 00:12:49.933 [2024-11-18 10:41:15.766302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.192 [2024-11-18 10:41:15.862076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.192 [2024-11-18 10:41:15.862150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.450 Running I/O for 5 seconds... 
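The big-IO pass starting here reuses the same bdevperf harness but with a 65536-byte IO size (-o 65536 in the trace above); a brief sketch, with the corresponding IOPS-to-MiB/s relation for the table that follows.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    # At 64 KiB per IO, MiB/s = IOPS / 16; the summary row below reports
    # 1366.01 IOPS, i.e. about 85.38 MiB/s.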
00:12:56.541 1248.00 IOPS, 78.00 MiB/s [2024-11-18T10:41:22.425Z] 2653.50 IOPS, 165.84 MiB/s 00:12:56.541 Latency(us) 00:12:56.541 [2024-11-18T10:41:22.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.541 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x0 length 0xa000 00:12:56.541 nvme0n1 : 6.05 127.04 7.94 0.00 0.00 987623.25 6755.25 1167952.34 00:12:56.541 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0xa000 length 0xa000 00:12:56.541 nvme0n1 : 6.03 74.34 4.65 0.00 0.00 1667249.34 179064.52 3136048.84 00:12:56.541 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x0 length 0xbd0b 00:12:56.541 nvme1n1 : 6.05 116.33 7.27 0.00 0.00 1041557.16 12351.02 1393799.48 00:12:56.541 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:56.541 nvme1n1 : 5.99 144.84 9.05 0.00 0.00 831396.87 7864.32 1819682.66 00:12:56.541 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x0 length 0x8000 00:12:56.541 nvme2n1 : 6.05 95.16 5.95 0.00 0.00 1230308.69 11292.36 1219574.55 00:12:56.541 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x8000 length 0x8000 00:12:56.541 nvme2n1 : 5.99 138.94 8.68 0.00 0.00 834837.03 185517.29 796917.76 00:12:56.541 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x0 length 0x8000 00:12:56.541 nvme2n2 : 6.05 113.63 7.10 0.00 0.00 993225.41 10687.41 1464780.01 00:12:56.541 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x8000 length 0x8000 00:12:56.541 nvme2n2 : 6.00 149.39 9.34 0.00 0.00 766195.06 72997.02 693673.35 00:12:56.541 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x0 length 0x8000 00:12:56.541 nvme2n3 : 6.05 95.23 5.95 0.00 0.00 1139023.73 13409.67 1264743.98 00:12:56.541 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x8000 length 0x8000 00:12:56.541 nvme2n3 : 6.00 125.28 7.83 0.00 0.00 885431.11 6654.42 1509949.44 00:12:56.541 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x0 length 0x2000 00:12:56.541 nvme3n1 : 6.05 74.04 4.63 0.00 0.00 1412968.65 7813.91 1910021.51 00:12:56.541 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.541 Verification LBA range: start 0x2000 length 0x2000 00:12:56.541 nvme3n1 : 6.01 111.80 6.99 0.00 0.00 969645.44 3062.55 2413337.99 00:12:56.541 [2024-11-18T10:41:22.425Z] =================================================================================================================== 00:12:56.541 [2024-11-18T10:41:22.425Z] Total : 1366.01 85.38 0.00 0.00 1013628.94 3062.55 3136048.84 00:12:57.476 00:12:57.476 real 0m7.735s 00:12:57.476 user 0m14.332s 00:12:57.476 sys 0m0.359s 00:12:57.476 10:41:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.476 ************************************ 00:12:57.476 END TEST bdev_verify_big_io 
00:12:57.476 ************************************ 00:12:57.476 10:41:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.476 10:41:23 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.476 10:41:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:57.476 10:41:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.476 10:41:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.476 ************************************ 00:12:57.476 START TEST bdev_write_zeroes 00:12:57.476 ************************************ 00:12:57.476 10:41:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.734 [2024-11-18 10:41:23.413008] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:57.734 [2024-11-18 10:41:23.413121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70305 ] 00:12:57.734 [2024-11-18 10:41:23.574218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.991 [2024-11-18 10:41:23.670411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.249 Running I/O for 1 seconds... 00:12:59.442 105632.00 IOPS, 412.62 MiB/s 00:12:59.442 Latency(us) 00:12:59.442 [2024-11-18T10:41:25.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.442 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.442 nvme0n1 : 1.01 17338.02 67.73 0.00 0.00 7373.74 5142.06 19660.80 00:12:59.442 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.442 nvme1n1 : 1.02 18342.84 71.65 0.00 0.00 6952.92 4360.66 15829.46 00:12:59.442 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.442 nvme2n1 : 1.02 17368.91 67.85 0.00 0.00 7310.53 4385.87 16636.06 00:12:59.442 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.442 nvme2n2 : 1.03 17185.54 67.13 0.00 0.00 7382.24 4385.87 16535.24 00:12:59.442 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.442 nvme2n3 : 1.03 17153.06 67.00 0.00 0.00 7394.35 4486.70 16434.41 00:12:59.442 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.442 nvme3n1 : 1.02 17273.05 67.47 0.00 0.00 7335.86 4511.90 15829.46 00:12:59.442 [2024-11-18T10:41:25.326Z] =================================================================================================================== 00:12:59.442 [2024-11-18T10:41:25.326Z] Total : 104661.42 408.83 0.00 0.00 7288.23 4360.66 19660.80 00:13:00.006 00:13:00.006 real 0m2.463s 00:13:00.006 user 0m1.885s 00:13:00.006 sys 0m0.380s 00:13:00.006 10:41:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.006 10:41:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:00.006 ************************************ 00:13:00.006 END 
TEST bdev_write_zeroes 00:13:00.006 ************************************ 00:13:00.006 10:41:25 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.006 10:41:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:00.006 10:41:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.006 10:41:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.006 ************************************ 00:13:00.006 START TEST bdev_json_nonenclosed 00:13:00.006 ************************************ 00:13:00.006 10:41:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.264 [2024-11-18 10:41:25.939502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:00.264 [2024-11-18 10:41:25.939611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70352 ] 00:13:00.264 [2024-11-18 10:41:26.099257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.523 [2024-11-18 10:41:26.192713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.523 [2024-11-18 10:41:26.192780] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:00.523 [2024-11-18 10:41:26.192795] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:00.523 [2024-11-18 10:41:26.192804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:00.523 00:13:00.523 real 0m0.490s 00:13:00.523 user 0m0.293s 00:13:00.523 sys 0m0.093s 00:13:00.523 10:41:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.523 10:41:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:00.523 ************************************ 00:13:00.523 END TEST bdev_json_nonenclosed 00:13:00.523 ************************************ 00:13:00.781 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.781 10:41:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:00.781 10:41:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.781 10:41:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.781 ************************************ 00:13:00.781 START TEST bdev_json_nonarray 00:13:00.781 ************************************ 00:13:00.781 10:41:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.781 [2024-11-18 10:41:26.495085] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
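For context on the two negative tests in this stretch: bdev_json_nonenclosed (finished above) and bdev_json_nonarray (starting here) each hand bdevperf a deliberately malformed --json config and expect the app to stop non-zero. The log never shows the files' contents, so the shapes below are only an illustrative sketch of what json_config_prepare_ctx accepts and rejects, inferred from the error strings in the trace:

  # accepted: one top-level object enclosing a "subsystems" array
  echo '{ "subsystems": [] }' > /tmp/enclosed.json
  # rejected with "Invalid JSON configuration: not enclosed in {}."
  echo '[ "not", "an", "object" ]' > /tmp/nonenclosed.json
  # rejected with "Invalid JSON configuration: 'subsystems' should be an array."
  echo '{ "subsystems": {} }' > /tmp/nonarray.json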
00:13:00.781 [2024-11-18 10:41:26.495193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70378 ] 00:13:00.781 [2024-11-18 10:41:26.655160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.044 [2024-11-18 10:41:26.748113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.044 [2024-11-18 10:41:26.748184] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:01.044 [2024-11-18 10:41:26.748201] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:01.044 [2024-11-18 10:41:26.748222] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.044 00:13:01.044 real 0m0.492s 00:13:01.044 user 0m0.294s 00:13:01.044 sys 0m0.094s 00:13:01.044 10:41:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.044 ************************************ 00:13:01.044 END TEST bdev_json_nonarray 00:13:01.044 ************************************ 00:13:01.044 10:41:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:01.303 10:41:26 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:01.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.151 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.411 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.412 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.412 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.412 00:13:08.412 real 0m59.084s 00:13:08.412 user 1m24.562s 00:13:08.412 sys 0m35.641s 00:13:08.412 ************************************ 00:13:08.412 END TEST blockdev_xnvme 00:13:08.412 10:41:34 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.412 10:41:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.412 ************************************ 00:13:08.412 10:41:34 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:08.412 10:41:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.412 10:41:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.412 10:41:34 -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.412 ************************************ 00:13:08.412 START TEST ublk 00:13:08.412 ************************************ 00:13:08.412 10:41:34 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:08.673 * Looking for test storage... 00:13:08.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:08.673 10:41:34 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.673 10:41:34 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.673 10:41:34 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.673 10:41:34 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.673 10:41:34 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.673 10:41:34 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.673 10:41:34 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.673 10:41:34 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.673 10:41:34 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.673 10:41:34 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.673 10:41:34 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.673 10:41:34 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.673 10:41:34 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.673 10:41:34 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.674 10:41:34 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.674 10:41:34 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:08.674 10:41:34 ublk -- scripts/common.sh@345 -- # : 1 00:13:08.674 10:41:34 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.674 10:41:34 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.674 10:41:34 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:08.674 10:41:34 ublk -- scripts/common.sh@353 -- # local d=1 00:13:08.674 10:41:34 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.674 10:41:34 ublk -- scripts/common.sh@355 -- # echo 1 00:13:08.674 10:41:34 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.674 10:41:34 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:08.674 10:41:34 ublk -- scripts/common.sh@353 -- # local d=2 00:13:08.674 10:41:34 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.674 10:41:34 ublk -- scripts/common.sh@355 -- # echo 2 00:13:08.674 10:41:34 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.674 10:41:34 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.674 10:41:34 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.674 10:41:34 ublk -- scripts/common.sh@368 -- # return 0 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.674 --rc genhtml_branch_coverage=1 00:13:08.674 --rc genhtml_function_coverage=1 00:13:08.674 --rc genhtml_legend=1 00:13:08.674 --rc geninfo_all_blocks=1 00:13:08.674 --rc geninfo_unexecuted_blocks=1 00:13:08.674 00:13:08.674 ' 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.674 --rc genhtml_branch_coverage=1 00:13:08.674 --rc genhtml_function_coverage=1 00:13:08.674 --rc genhtml_legend=1 00:13:08.674 --rc geninfo_all_blocks=1 00:13:08.674 --rc geninfo_unexecuted_blocks=1 00:13:08.674 00:13:08.674 ' 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.674 --rc genhtml_branch_coverage=1 00:13:08.674 --rc genhtml_function_coverage=1 00:13:08.674 --rc genhtml_legend=1 00:13:08.674 --rc geninfo_all_blocks=1 00:13:08.674 --rc geninfo_unexecuted_blocks=1 00:13:08.674 00:13:08.674 ' 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.674 --rc genhtml_branch_coverage=1 00:13:08.674 --rc genhtml_function_coverage=1 00:13:08.674 --rc genhtml_legend=1 00:13:08.674 --rc geninfo_all_blocks=1 00:13:08.674 --rc geninfo_unexecuted_blocks=1 00:13:08.674 00:13:08.674 ' 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:08.674 10:41:34 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:08.674 10:41:34 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:08.674 10:41:34 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:08.674 10:41:34 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:08.674 10:41:34 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:08.674 10:41:34 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:08.674 10:41:34 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:08.674 10:41:34 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:08.674 10:41:34 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:08.674 10:41:34 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.674 10:41:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:08.674 ************************************ 00:13:08.674 START TEST test_save_ublk_config 00:13:08.674 ************************************ 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70677 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70677 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70677 ']' 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.674 10:41:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.674 [2024-11-18 10:41:34.544728] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
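For orientation before the trace continues: test_save_config amounts to bringing up a ublk disk on the spdk_tgt that just started and capturing the running configuration over JSON-RPC. A rough equivalent using scripts/rpc.py follows; this is a sketch only (the script drives the same RPCs through its rpc_cmd helper, and the malloc sizing mirrors the num_blocks/block_size visible in the config dump below):

  scripts/rpc.py ublk_create_target                      # "UBLK target created successfully"
  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B
  scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
  scripts/rpc.py save_config > saved.json                # emits the JSON shown below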
00:13:08.674 [2024-11-18 10:41:34.544882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70677 ] 00:13:08.936 [2024-11-18 10:41:34.708521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.197 [2024-11-18 10:41:34.828377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.769 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.769 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:09.769 10:41:35 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:09.769 10:41:35 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:09.769 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.769 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:09.769 [2024-11-18 10:41:35.554232] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:09.769 [2024-11-18 10:41:35.555139] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:09.769 malloc0 00:13:09.769 [2024-11-18 10:41:35.626371] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:09.769 [2024-11-18 10:41:35.626472] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:09.769 [2024-11-18 10:41:35.626483] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:09.769 [2024-11-18 10:41:35.626491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.769 [2024-11-18 10:41:35.635332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.769 [2024-11-18 10:41:35.635359] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.769 [2024-11-18 10:41:35.642245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.769 [2024-11-18 10:41:35.642378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:10.029 [2024-11-18 10:41:35.659241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:10.029 0 00:13:10.029 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.029 10:41:35 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:10.029 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.029 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:10.291 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.291 10:41:35 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:10.291 "subsystems": [ 00:13:10.291 { 00:13:10.291 "subsystem": "fsdev", 00:13:10.291 "config": [ 00:13:10.291 { 00:13:10.291 "method": "fsdev_set_opts", 00:13:10.291 "params": { 00:13:10.291 "fsdev_io_pool_size": 65535, 00:13:10.291 "fsdev_io_cache_size": 256 00:13:10.291 } 00:13:10.291 } 00:13:10.291 ] 00:13:10.291 }, 00:13:10.291 { 00:13:10.291 "subsystem": "keyring", 00:13:10.291 "config": [] 00:13:10.291 }, 00:13:10.291 { 00:13:10.291 "subsystem": "iobuf", 00:13:10.291 "config": [ 00:13:10.291 { 
00:13:10.291 "method": "iobuf_set_options", 00:13:10.291 "params": { 00:13:10.291 "small_pool_count": 8192, 00:13:10.291 "large_pool_count": 1024, 00:13:10.291 "small_bufsize": 8192, 00:13:10.291 "large_bufsize": 135168, 00:13:10.291 "enable_numa": false 00:13:10.291 } 00:13:10.291 } 00:13:10.291 ] 00:13:10.291 }, 00:13:10.291 { 00:13:10.291 "subsystem": "sock", 00:13:10.291 "config": [ 00:13:10.291 { 00:13:10.291 "method": "sock_set_default_impl", 00:13:10.291 "params": { 00:13:10.291 "impl_name": "posix" 00:13:10.291 } 00:13:10.291 }, 00:13:10.291 { 00:13:10.291 "method": "sock_impl_set_options", 00:13:10.291 "params": { 00:13:10.291 "impl_name": "ssl", 00:13:10.291 "recv_buf_size": 4096, 00:13:10.291 "send_buf_size": 4096, 00:13:10.291 "enable_recv_pipe": true, 00:13:10.291 "enable_quickack": false, 00:13:10.291 "enable_placement_id": 0, 00:13:10.291 "enable_zerocopy_send_server": true, 00:13:10.291 "enable_zerocopy_send_client": false, 00:13:10.291 "zerocopy_threshold": 0, 00:13:10.291 "tls_version": 0, 00:13:10.291 "enable_ktls": false 00:13:10.291 } 00:13:10.291 }, 00:13:10.291 { 00:13:10.291 "method": "sock_impl_set_options", 00:13:10.291 "params": { 00:13:10.291 "impl_name": "posix", 00:13:10.291 "recv_buf_size": 2097152, 00:13:10.291 "send_buf_size": 2097152, 00:13:10.291 "enable_recv_pipe": true, 00:13:10.291 "enable_quickack": false, 00:13:10.291 "enable_placement_id": 0, 00:13:10.291 "enable_zerocopy_send_server": true, 00:13:10.291 "enable_zerocopy_send_client": false, 00:13:10.291 "zerocopy_threshold": 0, 00:13:10.292 "tls_version": 0, 00:13:10.292 "enable_ktls": false 00:13:10.292 } 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "vmd", 00:13:10.292 "config": [] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "accel", 00:13:10.292 "config": [ 00:13:10.292 { 00:13:10.292 "method": "accel_set_options", 00:13:10.292 "params": { 00:13:10.292 "small_cache_size": 128, 00:13:10.292 "large_cache_size": 16, 00:13:10.292 "task_count": 2048, 00:13:10.292 "sequence_count": 2048, 00:13:10.292 "buf_count": 2048 00:13:10.292 } 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "bdev", 00:13:10.292 "config": [ 00:13:10.292 { 00:13:10.292 "method": "bdev_set_options", 00:13:10.292 "params": { 00:13:10.292 "bdev_io_pool_size": 65535, 00:13:10.292 "bdev_io_cache_size": 256, 00:13:10.292 "bdev_auto_examine": true, 00:13:10.292 "iobuf_small_cache_size": 128, 00:13:10.292 "iobuf_large_cache_size": 16 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "bdev_raid_set_options", 00:13:10.292 "params": { 00:13:10.292 "process_window_size_kb": 1024, 00:13:10.292 "process_max_bandwidth_mb_sec": 0 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "bdev_iscsi_set_options", 00:13:10.292 "params": { 00:13:10.292 "timeout_sec": 30 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "bdev_nvme_set_options", 00:13:10.292 "params": { 00:13:10.292 "action_on_timeout": "none", 00:13:10.292 "timeout_us": 0, 00:13:10.292 "timeout_admin_us": 0, 00:13:10.292 "keep_alive_timeout_ms": 10000, 00:13:10.292 "arbitration_burst": 0, 00:13:10.292 "low_priority_weight": 0, 00:13:10.292 "medium_priority_weight": 0, 00:13:10.292 "high_priority_weight": 0, 00:13:10.292 "nvme_adminq_poll_period_us": 10000, 00:13:10.292 "nvme_ioq_poll_period_us": 0, 00:13:10.292 "io_queue_requests": 0, 00:13:10.292 "delay_cmd_submit": true, 00:13:10.292 "transport_retry_count": 4, 00:13:10.292 
"bdev_retry_count": 3, 00:13:10.292 "transport_ack_timeout": 0, 00:13:10.292 "ctrlr_loss_timeout_sec": 0, 00:13:10.292 "reconnect_delay_sec": 0, 00:13:10.292 "fast_io_fail_timeout_sec": 0, 00:13:10.292 "disable_auto_failback": false, 00:13:10.292 "generate_uuids": false, 00:13:10.292 "transport_tos": 0, 00:13:10.292 "nvme_error_stat": false, 00:13:10.292 "rdma_srq_size": 0, 00:13:10.292 "io_path_stat": false, 00:13:10.292 "allow_accel_sequence": false, 00:13:10.292 "rdma_max_cq_size": 0, 00:13:10.292 "rdma_cm_event_timeout_ms": 0, 00:13:10.292 "dhchap_digests": [ 00:13:10.292 "sha256", 00:13:10.292 "sha384", 00:13:10.292 "sha512" 00:13:10.292 ], 00:13:10.292 "dhchap_dhgroups": [ 00:13:10.292 "null", 00:13:10.292 "ffdhe2048", 00:13:10.292 "ffdhe3072", 00:13:10.292 "ffdhe4096", 00:13:10.292 "ffdhe6144", 00:13:10.292 "ffdhe8192" 00:13:10.292 ] 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "bdev_nvme_set_hotplug", 00:13:10.292 "params": { 00:13:10.292 "period_us": 100000, 00:13:10.292 "enable": false 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "bdev_malloc_create", 00:13:10.292 "params": { 00:13:10.292 "name": "malloc0", 00:13:10.292 "num_blocks": 8192, 00:13:10.292 "block_size": 4096, 00:13:10.292 "physical_block_size": 4096, 00:13:10.292 "uuid": "30603b4a-433a-4538-80b1-7b191abef81d", 00:13:10.292 "optimal_io_boundary": 0, 00:13:10.292 "md_size": 0, 00:13:10.292 "dif_type": 0, 00:13:10.292 "dif_is_head_of_md": false, 00:13:10.292 "dif_pi_format": 0 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "bdev_wait_for_examine" 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "scsi", 00:13:10.292 "config": null 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "scheduler", 00:13:10.292 "config": [ 00:13:10.292 { 00:13:10.292 "method": "framework_set_scheduler", 00:13:10.292 "params": { 00:13:10.292 "name": "static" 00:13:10.292 } 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "vhost_scsi", 00:13:10.292 "config": [] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "vhost_blk", 00:13:10.292 "config": [] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "ublk", 00:13:10.292 "config": [ 00:13:10.292 { 00:13:10.292 "method": "ublk_create_target", 00:13:10.292 "params": { 00:13:10.292 "cpumask": "1" 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "ublk_start_disk", 00:13:10.292 "params": { 00:13:10.292 "bdev_name": "malloc0", 00:13:10.292 "ublk_id": 0, 00:13:10.292 "num_queues": 1, 00:13:10.292 "queue_depth": 128 00:13:10.292 } 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "nbd", 00:13:10.292 "config": [] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "nvmf", 00:13:10.292 "config": [ 00:13:10.292 { 00:13:10.292 "method": "nvmf_set_config", 00:13:10.292 "params": { 00:13:10.292 "discovery_filter": "match_any", 00:13:10.292 "admin_cmd_passthru": { 00:13:10.292 "identify_ctrlr": false 00:13:10.292 }, 00:13:10.292 "dhchap_digests": [ 00:13:10.292 "sha256", 00:13:10.292 "sha384", 00:13:10.292 "sha512" 00:13:10.292 ], 00:13:10.292 "dhchap_dhgroups": [ 00:13:10.292 "null", 00:13:10.292 "ffdhe2048", 00:13:10.292 "ffdhe3072", 00:13:10.292 "ffdhe4096", 00:13:10.292 "ffdhe6144", 00:13:10.292 "ffdhe8192" 00:13:10.292 ] 00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "nvmf_set_max_subsystems", 00:13:10.292 "params": { 00:13:10.292 "max_subsystems": 1024 
00:13:10.292 } 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "method": "nvmf_set_crdt", 00:13:10.292 "params": { 00:13:10.292 "crdt1": 0, 00:13:10.292 "crdt2": 0, 00:13:10.292 "crdt3": 0 00:13:10.292 } 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }, 00:13:10.292 { 00:13:10.292 "subsystem": "iscsi", 00:13:10.292 "config": [ 00:13:10.292 { 00:13:10.292 "method": "iscsi_set_options", 00:13:10.292 "params": { 00:13:10.292 "node_base": "iqn.2016-06.io.spdk", 00:13:10.292 "max_sessions": 128, 00:13:10.292 "max_connections_per_session": 2, 00:13:10.292 "max_queue_depth": 64, 00:13:10.292 "default_time2wait": 2, 00:13:10.292 "default_time2retain": 20, 00:13:10.292 "first_burst_length": 8192, 00:13:10.292 "immediate_data": true, 00:13:10.292 "allow_duplicated_isid": false, 00:13:10.292 "error_recovery_level": 0, 00:13:10.292 "nop_timeout": 60, 00:13:10.292 "nop_in_interval": 30, 00:13:10.292 "disable_chap": false, 00:13:10.292 "require_chap": false, 00:13:10.292 "mutual_chap": false, 00:13:10.292 "chap_group": 0, 00:13:10.292 "max_large_datain_per_connection": 64, 00:13:10.292 "max_r2t_per_connection": 4, 00:13:10.292 "pdu_pool_size": 36864, 00:13:10.292 "immediate_data_pool_size": 16384, 00:13:10.292 "data_out_pool_size": 2048 00:13:10.292 } 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 } 00:13:10.292 ] 00:13:10.292 }' 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70677 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70677 ']' 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70677 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70677 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.292 killing process with pid 70677 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70677' 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70677 00:13:10.292 10:41:35 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70677 00:13:11.681 [2024-11-18 10:41:37.505239] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:11.681 [2024-11-18 10:41:37.544371] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:11.681 [2024-11-18 10:41:37.544490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:11.681 [2024-11-18 10:41:37.553267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:11.681 [2024-11-18 10:41:37.553321] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:11.681 [2024-11-18 10:41:37.553333] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:11.681 [2024-11-18 10:41:37.553357] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:11.681 [2024-11-18 10:41:37.553487] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70738 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 70738 00:13:13.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70738 ']' 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:13.075 10:41:38 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:13.075 "subsystems": [ 00:13:13.075 { 00:13:13.075 "subsystem": "fsdev", 00:13:13.075 "config": [ 00:13:13.075 { 00:13:13.075 "method": "fsdev_set_opts", 00:13:13.075 "params": { 00:13:13.075 "fsdev_io_pool_size": 65535, 00:13:13.075 "fsdev_io_cache_size": 256 00:13:13.075 } 00:13:13.075 } 00:13:13.075 ] 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "subsystem": "keyring", 00:13:13.075 "config": [] 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "subsystem": "iobuf", 00:13:13.075 "config": [ 00:13:13.075 { 00:13:13.075 "method": "iobuf_set_options", 00:13:13.075 "params": { 00:13:13.075 "small_pool_count": 8192, 00:13:13.075 "large_pool_count": 1024, 00:13:13.075 "small_bufsize": 8192, 00:13:13.075 "large_bufsize": 135168, 00:13:13.075 "enable_numa": false 00:13:13.075 } 00:13:13.075 } 00:13:13.075 ] 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "subsystem": "sock", 00:13:13.075 "config": [ 00:13:13.075 { 00:13:13.075 "method": "sock_set_default_impl", 00:13:13.075 "params": { 00:13:13.075 "impl_name": "posix" 00:13:13.075 } 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "method": "sock_impl_set_options", 00:13:13.075 "params": { 00:13:13.075 "impl_name": "ssl", 00:13:13.075 "recv_buf_size": 4096, 00:13:13.075 "send_buf_size": 4096, 00:13:13.075 "enable_recv_pipe": true, 00:13:13.075 "enable_quickack": false, 00:13:13.075 "enable_placement_id": 0, 00:13:13.075 "enable_zerocopy_send_server": true, 00:13:13.075 "enable_zerocopy_send_client": false, 00:13:13.075 "zerocopy_threshold": 0, 00:13:13.075 "tls_version": 0, 00:13:13.075 "enable_ktls": false 00:13:13.075 } 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "method": "sock_impl_set_options", 00:13:13.075 "params": { 00:13:13.075 "impl_name": "posix", 00:13:13.075 "recv_buf_size": 2097152, 00:13:13.075 "send_buf_size": 2097152, 00:13:13.075 "enable_recv_pipe": true, 00:13:13.075 "enable_quickack": false, 00:13:13.075 "enable_placement_id": 0, 00:13:13.075 "enable_zerocopy_send_server": true, 00:13:13.075 "enable_zerocopy_send_client": false, 00:13:13.075 "zerocopy_threshold": 0, 00:13:13.075 "tls_version": 0, 00:13:13.075 "enable_ktls": false 00:13:13.075 } 00:13:13.075 } 00:13:13.075 ] 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "subsystem": "vmd", 00:13:13.075 "config": [] 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "subsystem": "accel", 00:13:13.075 "config": [ 00:13:13.075 { 00:13:13.075 "method": "accel_set_options", 00:13:13.075 "params": { 00:13:13.075 "small_cache_size": 128, 
00:13:13.075 "large_cache_size": 16, 00:13:13.075 "task_count": 2048, 00:13:13.075 "sequence_count": 2048, 00:13:13.075 "buf_count": 2048 00:13:13.075 } 00:13:13.075 } 00:13:13.075 ] 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "subsystem": "bdev", 00:13:13.075 "config": [ 00:13:13.075 { 00:13:13.075 "method": "bdev_set_options", 00:13:13.075 "params": { 00:13:13.075 "bdev_io_pool_size": 65535, 00:13:13.075 "bdev_io_cache_size": 256, 00:13:13.075 "bdev_auto_examine": true, 00:13:13.075 "iobuf_small_cache_size": 128, 00:13:13.075 "iobuf_large_cache_size": 16 00:13:13.075 } 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "method": "bdev_raid_set_options", 00:13:13.075 "params": { 00:13:13.075 "process_window_size_kb": 1024, 00:13:13.075 "process_max_bandwidth_mb_sec": 0 00:13:13.075 } 00:13:13.075 }, 00:13:13.075 { 00:13:13.075 "method": "bdev_iscsi_set_options", 00:13:13.076 "params": { 00:13:13.076 "timeout_sec": 30 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "bdev_nvme_set_options", 00:13:13.076 "params": { 00:13:13.076 "action_on_timeout": "none", 00:13:13.076 "timeout_us": 0, 00:13:13.076 "timeout_admin_us": 0, 00:13:13.076 "keep_alive_timeout_ms": 10000, 00:13:13.076 "arbitration_burst": 0, 00:13:13.076 "low_priority_weight": 0, 00:13:13.076 "medium_priority_weight": 0, 00:13:13.076 "high_priority_weight": 0, 00:13:13.076 "nvme_adminq_poll_period_us": 10000, 00:13:13.076 "nvme_ioq_poll_period_us": 0, 00:13:13.076 "io_queue_requests": 0, 00:13:13.076 "delay_cmd_submit": true, 00:13:13.076 "transport_retry_count": 4, 00:13:13.076 "bdev_retry_count": 3, 00:13:13.076 "transport_ack_timeout": 0, 00:13:13.076 "ctrlr_loss_timeout_sec": 0, 00:13:13.076 "reconnect_delay_sec": 0, 00:13:13.076 "fast_io_fail_timeout_sec": 0, 00:13:13.076 "disable_auto_failback": false, 00:13:13.076 "generate_uuids": false, 00:13:13.076 "transport_tos": 0, 00:13:13.076 "nvme_error_stat": false, 00:13:13.076 "rdma_srq_size": 0, 00:13:13.076 "io_path_stat": false, 00:13:13.076 "allow_accel_sequence": false, 00:13:13.076 "rdma_max_cq_size": 0, 00:13:13.076 "rdma_cm_event_timeout_ms": 0, 00:13:13.076 "dhchap_digests": [ 00:13:13.076 "sha256", 00:13:13.076 "sha384", 00:13:13.076 "sha512" 00:13:13.076 ], 00:13:13.076 "dhchap_dhgroups": [ 00:13:13.076 "null", 00:13:13.076 "ffdhe2048", 00:13:13.076 "ffdhe3072", 00:13:13.076 "ffdhe4096", 00:13:13.076 "ffdhe6144", 00:13:13.076 "ffdhe8192" 00:13:13.076 ] 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "bdev_nvme_set_hotplug", 00:13:13.076 "params": { 00:13:13.076 "period_us": 100000, 00:13:13.076 "enable": false 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "bdev_malloc_create", 00:13:13.076 "params": { 00:13:13.076 "name": "malloc0", 00:13:13.076 "num_blocks": 8192, 00:13:13.076 "block_size": 4096, 00:13:13.076 "physical_block_size": 4096, 00:13:13.076 "uuid": "30603b4a-433a-4538-80b1-7b191abef81d", 00:13:13.076 "optimal_io_boundary": 0, 00:13:13.076 "md_size": 0, 00:13:13.076 "dif_type": 0, 00:13:13.076 "dif_is_head_of_md": false, 00:13:13.076 "dif_pi_format": 0 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "bdev_wait_for_examine" 00:13:13.076 } 00:13:13.076 ] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "scsi", 00:13:13.076 "config": null 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "scheduler", 00:13:13.076 "config": [ 00:13:13.076 { 00:13:13.076 "method": "framework_set_scheduler", 00:13:13.076 "params": { 00:13:13.076 "name": "static" 00:13:13.076 } 
00:13:13.076 } 00:13:13.076 ] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "vhost_scsi", 00:13:13.076 "config": [] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "vhost_blk", 00:13:13.076 "config": [] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "ublk", 00:13:13.076 "config": [ 00:13:13.076 { 00:13:13.076 "method": "ublk_create_target", 00:13:13.076 "params": { 00:13:13.076 "cpumask": "1" 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "ublk_start_disk", 00:13:13.076 "params": { 00:13:13.076 "bdev_name": "malloc0", 00:13:13.076 "ublk_id": 0, 00:13:13.076 "num_queues": 1, 00:13:13.076 "queue_depth": 128 00:13:13.076 } 00:13:13.076 } 00:13:13.076 ] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "nbd", 00:13:13.076 "config": [] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "nvmf", 00:13:13.076 "config": [ 00:13:13.076 { 00:13:13.076 "method": "nvmf_set_config", 00:13:13.076 "params": { 00:13:13.076 "discovery_filter": "match_any", 00:13:13.076 "admin_cmd_passthru": { 00:13:13.076 "identify_ctrlr": false 00:13:13.076 }, 00:13:13.076 "dhchap_digests": [ 00:13:13.076 "sha256", 00:13:13.076 "sha384", 00:13:13.076 "sha512" 00:13:13.076 ], 00:13:13.076 "dhchap_dhgroups": [ 00:13:13.076 "null", 00:13:13.076 "ffdhe2048", 00:13:13.076 "ffdhe3072", 00:13:13.076 "ffdhe4096", 00:13:13.076 "ffdhe6144", 00:13:13.076 "ffdhe8192" 00:13:13.076 ] 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "nvmf_set_max_subsystems", 00:13:13.076 "params": { 00:13:13.076 "max_subsystems": 1024 00:13:13.076 } 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "method": "nvmf_set_crdt", 00:13:13.076 "params": { 00:13:13.076 "crdt1": 0, 00:13:13.076 "crdt2": 0, 00:13:13.076 "crdt3": 0 00:13:13.076 } 00:13:13.076 } 00:13:13.076 ] 00:13:13.076 }, 00:13:13.076 { 00:13:13.076 "subsystem": "iscsi", 00:13:13.076 "config": [ 00:13:13.076 { 00:13:13.076 "method": "iscsi_set_options", 00:13:13.076 "params": { 00:13:13.076 "node_base": "iqn.2016-06.io.spdk", 00:13:13.076 "max_sessions": 128, 00:13:13.076 "max_connections_per_session": 2, 00:13:13.076 "max_queue_depth": 64, 00:13:13.076 "default_time2wait": 2, 00:13:13.076 "default_time2retain": 20, 00:13:13.076 "first_burst_length": 8192, 00:13:13.076 "immediate_data": true, 00:13:13.076 "allow_duplicated_isid": false, 00:13:13.076 "error_recovery_level": 0, 00:13:13.076 "nop_timeout": 60, 00:13:13.076 "nop_in_interval": 30, 00:13:13.076 "disable_chap": false, 00:13:13.076 "require_chap": false, 00:13:13.076 "mutual_chap": false, 00:13:13.076 "chap_group": 0, 00:13:13.076 "max_large_datain_per_connection": 64, 00:13:13.076 "max_r2t_per_connection": 4, 00:13:13.076 "pdu_pool_size": 36864, 00:13:13.076 "immediate_data_pool_size": 16384, 00:13:13.076 "data_out_pool_size": 2048 00:13:13.076 } 00:13:13.076 } 00:13:13.076 ] 00:13:13.076 } 00:13:13.076 ] 00:13:13.076 }' 00:13:13.076 [2024-11-18 10:41:38.798814] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
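A note on the spdk_tgt command line above: -c /dev/fd/63 is bash process substitution. ublk.sh@118 echoes the JSON captured by save_config into a pipe, and the second target boots from that file descriptor, so the round trip never touches disk. Roughly (a sketch, not the literal script):

  config=$(scripts/rpc.py save_config)             # captured from the first target
  build/bin/spdk_tgt -L ublk -c <(echo "$config")  # the shell materializes the pipe as /dev/fd/63

Once the target is up, ublk.sh@122-123 uses ublk_get_disks to confirm that /dev/ublkb0 was recreated purely from the restored config.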
00:13:13.076 [2024-11-18 10:41:38.798934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:13:13.076 [2024-11-18 10:41:38.955147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.337 [2024-11-18 10:41:39.041456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.909 [2024-11-18 10:41:39.684220] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:13.909 [2024-11-18 10:41:39.684867] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:13.909 [2024-11-18 10:41:39.692325] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:13.909 [2024-11-18 10:41:39.692386] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:13.909 [2024-11-18 10:41:39.692394] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:13.909 [2024-11-18 10:41:39.692399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:13.909 [2024-11-18 10:41:39.701271] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:13.909 [2024-11-18 10:41:39.701292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:13.909 [2024-11-18 10:41:39.708226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:13.909 [2024-11-18 10:41:39.708308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:13.909 [2024-11-18 10:41:39.725226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70738 00:13:13.909 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70738 ']' 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70738 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70738 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.170 killing process with pid 70738 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70738' 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70738 00:13:14.170 10:41:39 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70738 00:13:15.205 [2024-11-18 10:41:40.812673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.205 [2024-11-18 10:41:40.847231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.205 [2024-11-18 10:41:40.847331] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.205 [2024-11-18 10:41:40.855229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.205 [2024-11-18 10:41:40.855269] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:15.205 [2024-11-18 10:41:40.855275] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:15.205 [2024-11-18 10:41:40.855296] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:15.205 [2024-11-18 10:41:40.855403] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:16.590 10:41:42 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:16.590 00:13:16.590 real 0m7.702s 00:13:16.590 user 0m4.922s 00:13:16.590 sys 0m3.423s 00:13:16.590 10:41:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.590 10:41:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:16.590 ************************************ 00:13:16.590 END TEST test_save_ublk_config 00:13:16.590 ************************************ 00:13:16.590 10:41:42 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70805 00:13:16.590 10:41:42 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.590 10:41:42 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70805 00:13:16.590 10:41:42 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:16.590 10:41:42 ublk -- common/autotest_common.sh@835 -- # '[' -z 70805 ']' 00:13:16.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.590 10:41:42 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.590 10:41:42 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.590 10:41:42 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.590 10:41:42 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.590 10:41:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:16.590 [2024-11-18 10:41:42.277042] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
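Unlike the single-core targets used so far, ublk.sh@138 starts this spdk_tgt with -m 0x3. The argument is a hex cpumask with one bit per core, which is why the startup trace below reports two cores and a reactor on each:

  build/bin/spdk_tgt -m 0x3 -L ublk   # 0x3 = 0b11 -> cores 0 and 1
  # app.c then logs "Total cores available: 2" and reactor.c starts
  # one reactor per set bit, matching the two reactor lines that follow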
00:13:16.590 [2024-11-18 10:41:42.277158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70805 ] 00:13:16.590 [2024-11-18 10:41:42.428036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:16.852 [2024-11-18 10:41:42.505318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.852 [2024-11-18 10:41:42.505407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.424 10:41:43 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.424 10:41:43 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:17.424 10:41:43 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:17.424 10:41:43 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.424 10:41:43 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.424 10:41:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.424 ************************************ 00:13:17.424 START TEST test_create_ublk 00:13:17.424 ************************************ 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.424 [2024-11-18 10:41:43.079222] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:17.424 [2024-11-18 10:41:43.080737] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.424 [2024-11-18 10:41:43.231330] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:17.424 [2024-11-18 10:41:43.231624] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:17.424 [2024-11-18 10:41:43.231638] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:17.424 [2024-11-18 10:41:43.231644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:17.424 [2024-11-18 10:41:43.239415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:17.424 [2024-11-18 10:41:43.239433] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:17.424 
[2024-11-18 10:41:43.247229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:17.424 [2024-11-18 10:41:43.254275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:17.424 [2024-11-18 10:41:43.270223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:17.424 10:41:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:17.424 { 00:13:17.424 "ublk_device": "/dev/ublkb0", 00:13:17.424 "id": 0, 00:13:17.424 "queue_depth": 512, 00:13:17.424 "num_queues": 4, 00:13:17.424 "bdev_name": "Malloc0" 00:13:17.424 } 00:13:17.424 ]' 00:13:17.424 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:17.685 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:17.685 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:17.686 10:41:43 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:13:17.686 10:41:43 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:13:17.686 fio: verification read phase will never start because write phase uses all of runtime
00:13:17.686 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:13:17.686 fio-3.35
00:13:17.686 Starting 1 process
00:13:29.920
00:13:29.920 fio_test: (groupid=0, jobs=1): err= 0: pid=70851: Mon Nov 18 10:41:53 2024
00:13:29.920 write: IOPS=20.2k, BW=78.7MiB/s (82.6MB/s)(787MiB/10001msec); 0 zone resets
00:13:29.920 clat (usec): min=32, max=4043, avg=48.81, stdev=80.27
00:13:29.920 lat (usec): min=33, max=4043, avg=49.27, stdev=80.28
00:13:29.920 clat percentiles (usec):
00:13:29.920 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 43],
00:13:29.920 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46],
00:13:29.920 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 53], 95.00th=[ 59],
00:13:29.920 | 99.00th=[ 68], 99.50th=[ 74], 99.90th=[ 1237], 99.95th=[ 2409],
00:13:29.920 | 99.99th=[ 3425]
00:13:29.920 bw ( KiB/s): min=74640, max=88328, per=100.00%, avg=81003.37, stdev=3524.12, samples=19
00:13:29.920 iops : min=18660, max=22082, avg=20250.84, stdev=881.03, samples=19
00:13:29.920 lat (usec) : 50=84.34%, 100=15.40%, 250=0.11%, 500=0.02%, 750=0.01%
00:13:29.920 lat (usec) : 1000=0.01%
00:13:29.920 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01%
00:13:29.920 cpu : usr=3.76%, sys=15.95%, ctx=201588, majf=0, minf=797
00:13:29.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:29.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:29.920 issued rwts: total=0,201589,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:29.920 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:29.920
00:13:29.920 Run status group 0 (all jobs):
00:13:29.920 WRITE: bw=78.7MiB/s (82.6MB/s), 78.7MiB/s-78.7MiB/s (82.6MB/s-82.6MB/s), io=787MiB (826MB), run=10001-10001msec
00:13:29.920
00:13:29.920 Disk stats (read/write):
00:13:29.920 ublkb0: ios=0/199712, merge=0/0, ticks=0/8049, in_queue=8050, util=99.11%
00:13:29.920 10:41:53 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:13:29.920 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.920 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:29.920 [2024-11-18 10:41:53.677089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:13:29.920 [2024-11-18 10:41:53.721258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:29.920 [2024-11-18 10:41:53.721835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:13:29.921 [2024-11-18 10:41:53.730255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:29.921 [2024-11-18 10:41:53.734421] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:13:29.921 [2024-11-18 10:41:53.734437] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.921 10:41:53 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 [2024-11-18 10:41:53.745286] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:29.921 request: 00:13:29.921 { 00:13:29.921 "ublk_id": 0, 00:13:29.921 "method": "ublk_stop_disk", 00:13:29.921 "req_id": 1 00:13:29.921 } 00:13:29.921 Got JSON-RPC error response 00:13:29.921 response: 00:13:29.921 { 00:13:29.921 "code": -19, 00:13:29.921 "message": "No such device" 00:13:29.921 } 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:29.921 10:41:53 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 [2024-11-18 10:41:53.761281] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:29.921 [2024-11-18 10:41:53.764880] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:29.921 [2024-11-18 10:41:53.764914] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:53 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:29.921 10:41:54 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:29.921 ************************************ 00:13:29.921 END TEST test_create_ublk 00:13:29.921 ************************************ 00:13:29.921 10:41:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:29.921 00:13:29.921 real 0m11.155s 00:13:29.921 user 0m0.683s 00:13:29.921 sys 0m1.673s 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:29.921 10:41:54 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:29.921 10:41:54 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.921 10:41:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 ************************************ 00:13:29.921 START TEST test_create_multi_ublk 00:13:29.921 ************************************ 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 [2024-11-18 10:41:54.275219] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:29.921 [2024-11-18 10:41:54.276801] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 [2024-11-18 10:41:54.491324] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:29.921 [2024-11-18 10:41:54.491624] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:29.921 [2024-11-18 10:41:54.491635] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:29.921 [2024-11-18 10:41:54.491643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:29.921 [2024-11-18 10:41:54.515232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:29.921 [2024-11-18 10:41:54.515253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:29.921 [2024-11-18 10:41:54.527225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:29.921 [2024-11-18 10:41:54.527716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:29.921 [2024-11-18 10:41:54.567226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 [2024-11-18 10:41:54.779324] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:29.921 [2024-11-18 10:41:54.779611] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:29.921 [2024-11-18 10:41:54.779624] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:29.921 [2024-11-18 10:41:54.779630] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:29.921 [2024-11-18 10:41:54.787244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:29.921 [2024-11-18 10:41:54.787263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:29.921 [2024-11-18 10:41:54.795236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:29.921 [2024-11-18 10:41:54.795734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:29.921 [2024-11-18 10:41:54.803259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.921 
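At this point the creation loop traced above has brought up /dev/ublkb0 and /dev/ublkb1, and the ublkb2 and ublkb3 iterations follow. A minimal sketch of what each pass amounts to, assuming spdk_tgt is already running with the ublk target created and that scripts/rpc.py is invoked from the repo root against the default /var/tmp/spdk.sock (bdev size and queue settings copied from the traces):

  for i in 0 1 2 3; do
      # 128 MiB malloc bdev with a 4096-byte block size
      scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
      # expose it as /dev/ublkb$i with 4 queues of depth 512
      scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done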
10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.921 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.921 [2024-11-18 10:41:54.963308] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:29.922 [2024-11-18 10:41:54.963600] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:29.922 [2024-11-18 10:41:54.963611] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:29.922 [2024-11-18 10:41:54.963617] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:29.922 [2024-11-18 10:41:54.971240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:29.922 [2024-11-18 10:41:54.971260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:29.922 [2024-11-18 10:41:54.979230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:29.922 [2024-11-18 10:41:54.979718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:29.922 [2024-11-18 10:41:54.982906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:29.922 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.922 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:29.922 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.922 10:41:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:29.922 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.922 10:41:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.922 [2024-11-18 10:41:55.135325] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:29.922 [2024-11-18 10:41:55.135616] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:29.922 [2024-11-18 10:41:55.135628] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:29.922 [2024-11-18 10:41:55.135633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:29.922 
[2024-11-18 10:41:55.144403] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:29.922 [2024-11-18 10:41:55.144420] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:29.922 [2024-11-18 10:41:55.151235] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:29.922 [2024-11-18 10:41:55.151724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:29.922 [2024-11-18 10:41:55.160248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:29.922 { 00:13:29.922 "ublk_device": "/dev/ublkb0", 00:13:29.922 "id": 0, 00:13:29.922 "queue_depth": 512, 00:13:29.922 "num_queues": 4, 00:13:29.922 "bdev_name": "Malloc0" 00:13:29.922 }, 00:13:29.922 { 00:13:29.922 "ublk_device": "/dev/ublkb1", 00:13:29.922 "id": 1, 00:13:29.922 "queue_depth": 512, 00:13:29.922 "num_queues": 4, 00:13:29.922 "bdev_name": "Malloc1" 00:13:29.922 }, 00:13:29.922 { 00:13:29.922 "ublk_device": "/dev/ublkb2", 00:13:29.922 "id": 2, 00:13:29.922 "queue_depth": 512, 00:13:29.922 "num_queues": 4, 00:13:29.922 "bdev_name": "Malloc2" 00:13:29.922 }, 00:13:29.922 { 00:13:29.922 "ublk_device": "/dev/ublkb3", 00:13:29.922 "id": 3, 00:13:29.922 "queue_depth": 512, 00:13:29.922 "num_queues": 4, 00:13:29.922 "bdev_name": "Malloc3" 00:13:29.922 } 00:13:29.922 ]' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:29.922 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.183 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:30.183 [2024-11-18 10:41:55.831298] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:30.183 [2024-11-18 10:41:55.863583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:30.183 [2024-11-18 10:41:55.864650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:30.183 [2024-11-18 10:41:55.871235] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:30.183 [2024-11-18 10:41:55.871470] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:30.184 [2024-11-18 10:41:55.871483] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:30.184 [2024-11-18 10:41:55.886276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:30.184 [2024-11-18 10:41:55.919234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:30.184 [2024-11-18 10:41:55.919922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:30.184 [2024-11-18 10:41:55.928250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:30.184 [2024-11-18 10:41:55.928476] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:30.184 [2024-11-18 10:41:55.928489] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:30.184 [2024-11-18 10:41:55.943290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:30.184 [2024-11-18 10:41:55.988225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:30.184 [2024-11-18 10:41:55.988877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:30.184 [2024-11-18 10:41:55.995221] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:30.184 [2024-11-18 10:41:55.995459] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:30.184 [2024-11-18 10:41:55.995473] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.184 10:41:55 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.184 [2024-11-18 10:41:56.003289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:30.184 [2024-11-18 10:41:56.037692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:30.184 [2024-11-18 10:41:56.038618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:30.184 [2024-11-18 10:41:56.048257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:30.184 [2024-11-18 10:41:56.048485] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:30.184 [2024-11-18 10:41:56.048494] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:30.184 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.184 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:30.445 [2024-11-18 10:41:56.239270] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:30.445 [2024-11-18 10:41:56.242811] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:30.445 [2024-11-18 10:41:56.242839] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:30.445 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:30.445 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:30.445 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:30.445 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.445 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.017 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.017 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:31.017 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:31.017 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.017 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.277 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.277 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:31.277 10:41:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:31.277 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.277 10:41:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:31.537 10:41:57 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.537 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.798 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:31.798 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:31.798 ************************************ 00:13:31.798 END TEST test_create_multi_ublk 00:13:31.798 ************************************ 00:13:31.798 10:41:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:31.798 00:13:31.798 real 0m3.191s 00:13:31.798 user 0m0.831s 00:13:31.798 sys 0m0.132s 00:13:31.798 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.798 10:41:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:31.798 10:41:57 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:31.798 10:41:57 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:31.798 10:41:57 ublk -- ublk/ublk.sh@130 -- # killprocess 70805 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@954 -- # '[' -z 70805 ']' 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@958 -- # kill -0 70805 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@959 -- # uname 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70805 00:13:31.798 killing process with pid 70805 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70805' 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@973 -- # kill 70805 00:13:31.798 10:41:57 ublk -- common/autotest_common.sh@978 -- # wait 70805 00:13:32.368 [2024-11-18 10:41:58.032980] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:32.368 [2024-11-18 10:41:58.033168] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:32.938 00:13:32.938 real 0m24.400s 00:13:32.938 user 0m34.031s 00:13:32.938 sys 0m10.510s 00:13:32.938 10:41:58 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.938 10:41:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.938 ************************************ 00:13:32.938 END TEST ublk 00:13:32.938 ************************************ 00:13:32.938 10:41:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:32.938 
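The ublk_recovery suite launched here drives a crash-and-recover cycle: bring up a ublk device on a malloc bdev, put fio I/O on it, SIGKILL the SPDK target mid-run, then start a fresh target and re-adopt the still-open kernel device. A condensed sketch of the setup half, assuming the ublk_drv module is available and scripts/rpc.py talks to the default /var/tmp/spdk.sock; $SPDK_BIN_DIR and $spdk_pid stand in for values the script tracks, and the flags are copied from the traces that follow:

  modprobe ublk_drv
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
  spdk_pid=$!
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # serves /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  sleep 5
  kill -9 "$spdk_pid"    # crash the target while I/O is in flight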
10:41:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.938 10:41:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.938 10:41:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.938 ************************************ 00:13:32.938 START TEST ublk_recovery 00:13:32.938 ************************************ 00:13:32.938 10:41:58 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:32.938 * Looking for test storage... 00:13:32.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:32.938 10:41:58 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:32.938 10:41:58 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.938 10:41:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.199 10:41:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.199 10:41:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.200 10:41:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.200 --rc genhtml_branch_coverage=1 00:13:33.200 --rc genhtml_function_coverage=1 00:13:33.200 --rc genhtml_legend=1 00:13:33.200 --rc geninfo_all_blocks=1 00:13:33.200 --rc geninfo_unexecuted_blocks=1 00:13:33.200 00:13:33.200 ' 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.200 --rc genhtml_branch_coverage=1 00:13:33.200 --rc genhtml_function_coverage=1 00:13:33.200 --rc genhtml_legend=1 00:13:33.200 --rc geninfo_all_blocks=1 00:13:33.200 --rc geninfo_unexecuted_blocks=1 00:13:33.200 00:13:33.200 ' 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.200 --rc genhtml_branch_coverage=1 00:13:33.200 --rc genhtml_function_coverage=1 00:13:33.200 --rc genhtml_legend=1 00:13:33.200 --rc geninfo_all_blocks=1 00:13:33.200 --rc geninfo_unexecuted_blocks=1 00:13:33.200 00:13:33.200 ' 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.200 --rc genhtml_branch_coverage=1 00:13:33.200 --rc genhtml_function_coverage=1 00:13:33.200 --rc genhtml_legend=1 00:13:33.200 --rc geninfo_all_blocks=1 00:13:33.200 --rc geninfo_unexecuted_blocks=1 00:13:33.200 00:13:33.200 ' 00:13:33.200 10:41:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:33.200 10:41:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:33.200 10:41:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:33.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.200 10:41:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71197 00:13:33.200 10:41:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.200 10:41:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71197 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71197 ']' 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.200 10:41:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:33.200 10:41:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.200 [2024-11-18 10:41:58.985172] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:33.200 [2024-11-18 10:41:58.985350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71197 ] 00:13:33.477 [2024-11-18 10:41:59.147049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:33.477 [2024-11-18 10:41:59.239932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.477 [2024-11-18 10:41:59.240033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:13:34.050 10:41:59 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.050 [2024-11-18 10:41:59.808222] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:34.050 [2024-11-18 10:41:59.809786] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.050 10:41:59 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.050 malloc0 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.050 10:41:59 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.050 [2024-11-18 10:41:59.888496] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:34.050 [2024-11-18 10:41:59.888575] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:34.050 [2024-11-18 10:41:59.888584] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:34.050 [2024-11-18 10:41:59.888591] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:34.050 [2024-11-18 10:41:59.897302] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:34.050 [2024-11-18 10:41:59.897319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:34.050 [2024-11-18 10:41:59.904233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:34.050 [2024-11-18 10:41:59.904353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:34.050 [2024-11-18 10:41:59.915235] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:34.050 1 00:13:34.050 10:41:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.050 10:41:59 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:35.436 10:42:00 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71233 00:13:35.436 10:42:00 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:35.436 10:42:00 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:35.436 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.436 fio-3.35 00:13:35.436 Starting 1 process 00:13:40.769 10:42:05 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71197 00:13:40.769 10:42:05 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:46.066 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71197 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:46.066 10:42:10 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:46.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.066 10:42:10 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71344 00:13:46.066 10:42:10 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:46.066 10:42:10 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71344 00:13:46.066 10:42:10 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71344 ']' 00:13:46.066 10:42:10 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.066 10:42:10 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.066 10:42:10 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.066 10:42:10 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.066 10:42:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.066 [2024-11-18 10:42:11.021861] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
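Here the original target (pid 71197) has been SIGKILLed with fio still holding requests in flight, and a replacement spdk_tgt (pid 71344) is initializing. The kernel side of /dev/ublkb1 survives the crash, so the new process only has to re-create the backing bdev and re-adopt the device; the traces below show the corresponding UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY control commands. A sketch of the RPC half of the recovery, assuming the bdev is re-created under the same name it had before the crash:

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  # rebind the re-created bdev to the surviving kernel device, ublk id 1
  scripts/rpc.py ublk_recover_disk malloc0 1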
00:13:46.066 [2024-11-18 10:42:11.022187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71344 ] 00:13:46.066 [2024-11-18 10:42:11.183803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.066 [2024-11-18 10:42:11.299141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.066 [2024-11-18 10:42:11.299248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:13:46.066 10:42:11 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.066 [2024-11-18 10:42:11.898225] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:46.066 [2024-11-18 10:42:11.900057] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.066 10:42:11 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.066 10:42:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.327 malloc0 00:13:46.327 10:42:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.327 10:42:11 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:46.327 10:42:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.327 10:42:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.327 [2024-11-18 10:42:12.002355] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:46.327 [2024-11-18 10:42:12.002395] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:46.327 [2024-11-18 10:42:12.002405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:46.327 [2024-11-18 10:42:12.010254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:46.327 [2024-11-18 10:42:12.010281] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:13:46.327 [2024-11-18 10:42:12.010289] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:46.327 [2024-11-18 10:42:12.010367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:46.327 1 00:13:46.327 10:42:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.327 10:42:12 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71233 00:13:46.327 [2024-11-18 10:42:12.018232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:46.327 [2024-11-18 10:42:12.021513] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:46.327 [2024-11-18 10:42:12.026405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:46.327 [2024-11-18 
10:42:12.026425] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:42.589 00:14:42.589 fio_test: (groupid=0, jobs=1): err= 0: pid=71236: Mon Nov 18 10:43:01 2024 00:14:42.589 read: IOPS=26.0k, BW=102MiB/s (107MB/s)(6102MiB/60002msec) 00:14:42.589 slat (nsec): min=1091, max=194553, avg=5237.37, stdev=1450.65 00:14:42.589 clat (usec): min=636, max=6106.4k, avg=2412.30, stdev=39064.86 00:14:42.589 lat (usec): min=641, max=6106.4k, avg=2417.54, stdev=39064.85 00:14:42.589 clat percentiles (usec): 00:14:42.589 | 1.00th=[ 1713], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:14:42.589 | 30.00th=[ 1876], 40.00th=[ 1909], 50.00th=[ 2114], 60.00th=[ 2147], 00:14:42.589 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2606], 95.00th=[ 2999], 00:14:42.589 | 99.00th=[ 4948], 99.50th=[ 5538], 99.90th=[ 6915], 99.95th=[ 7111], 00:14:42.589 | 99.99th=[ 8848] 00:14:42.589 bw ( KiB/s): min=10488, max=131864, per=100.00%, avg=114602.81, stdev=17210.52, samples=108 00:14:42.589 iops : min= 2622, max=32966, avg=28650.70, stdev=4302.63, samples=108 00:14:42.589 write: IOPS=26.0k, BW=102MiB/s (107MB/s)(6095MiB/60002msec); 0 zone resets 00:14:42.589 slat (nsec): min=1092, max=176308, avg=5391.76, stdev=1462.28 00:14:42.589 clat (usec): min=655, max=6106.5k, avg=2495.23, stdev=39086.88 00:14:42.589 lat (usec): min=660, max=6106.5k, avg=2500.62, stdev=39086.87 00:14:42.589 clat percentiles (usec): 00:14:42.589 | 1.00th=[ 1762], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1926], 00:14:42.589 | 30.00th=[ 1958], 40.00th=[ 2008], 50.00th=[ 2212], 60.00th=[ 2245], 00:14:42.589 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2704], 95.00th=[ 2966], 00:14:42.589 | 99.00th=[ 4948], 99.50th=[ 5604], 99.90th=[ 6915], 99.95th=[ 7242], 00:14:42.589 | 99.99th=[ 8979] 00:14:42.589 bw ( KiB/s): min=10048, max=131072, per=100.00%, avg=114469.59, stdev=17246.37, samples=108 00:14:42.589 iops : min= 2512, max=32768, avg=28617.40, stdev=4311.59, samples=108 00:14:42.589 lat (usec) : 750=0.01%, 1000=0.01% 00:14:42.589 lat (msec) : 2=42.12%, 4=55.42%, 10=2.45%, 20=0.01%, >=2000=0.01% 00:14:42.589 cpu : usr=5.87%, sys=28.39%, ctx=103901, majf=0, minf=13 00:14:42.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:42.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.589 issued rwts: total=1562214,1560391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.589 00:14:42.589 Run status group 0 (all jobs): 00:14:42.589 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=6102MiB (6399MB), run=60002-60002msec 00:14:42.589 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=6095MiB (6391MB), run=60002-60002msec 00:14:42.589 00:14:42.589 Disk stats (read/write): 00:14:42.589 ublkb1: ios=1558870/1556900, merge=0/0, ticks=3677656/3670818, in_queue=7348475, util=99.89% 00:14:42.589 10:43:01 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.589 [2024-11-18 10:43:01.172788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:42.589 [2024-11-18 10:43:01.218242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:14:42.589 [2024-11-18 10:43:01.218383] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:42.589 [2024-11-18 10:43:01.227229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:42.589 [2024-11-18 10:43:01.227334] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:42.589 [2024-11-18 10:43:01.227344] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.589 10:43:01 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.589 [2024-11-18 10:43:01.234301] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:42.589 [2024-11-18 10:43:01.237937] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:42.589 [2024-11-18 10:43:01.237968] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.589 10:43:01 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:42.589 10:43:01 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:42.589 10:43:01 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71344 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71344 ']' 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71344 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71344 00:14:42.589 killing process with pid 71344 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71344' 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71344 00:14:42.589 10:43:01 ublk_recovery -- common/autotest_common.sh@978 -- # wait 71344 00:14:42.589 [2024-11-18 10:43:02.300504] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:42.589 [2024-11-18 10:43:02.300547] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:42.589 ************************************ 00:14:42.589 END TEST ublk_recovery 00:14:42.589 ************************************ 00:14:42.589 00:14:42.589 real 1m4.257s 00:14:42.589 user 1m41.550s 00:14:42.589 sys 0m36.650s 00:14:42.589 10:43:02 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.589 10:43:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.589 10:43:03 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:14:42.589 10:43:03 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@260 -- # timing_exit lib 00:14:42.589 10:43:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:42.589 10:43:03 -- common/autotest_common.sh@10 -- # set +x 00:14:42.589 10:43:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:14:42.589 10:43:03 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:42.589 10:43:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:42.589 10:43:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.589 10:43:03 -- common/autotest_common.sh@10 -- # set +x 00:14:42.589 ************************************ 00:14:42.589 START TEST ftl 00:14:42.589 ************************************ 00:14:42.589 10:43:03 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:42.589 * Looking for test storage... 00:14:42.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:42.590 10:43:03 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:42.590 10:43:03 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:42.590 10:43:03 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:42.590 10:43:03 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
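The ftl.sh prologue below resolves its directories, starts spdk_tgt with --wait-for-rpc, attaches the local NVMe controllers, and then picks its devices by filtering bdev_get_bdevs through jq: the non-volatile cache must be a non-zoned bdev exposing 64 bytes of per-block metadata and at least 1310720 blocks, and the base device is any other non-zoned bdev of that size. A sketch of the cache-disk query, with the jq filter taken verbatim from the trace, assuming a running target with the NVMe bdevs already attached:

  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # -> 0000:00:10.0 on this runner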
00:14:42.590 10:43:03 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:42.590 10:43:03 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.590 10:43:03 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:42.590 10:43:03 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:42.590 10:43:03 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.590 10:43:03 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.590 10:43:03 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:42.590 10:43:03 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:42.590 10:43:03 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:42.590 10:43:03 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:42.590 10:43:03 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:42.590 10:43:03 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:42.590 10:43:03 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.590 10:43:03 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.590 10:43:03 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:42.590 10:43:03 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:42.590 10:43:03 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:42.590 10:43:03 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:42.590 10:43:03 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:42.590 10:43:03 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:42.590 10:43:03 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:42.590 10:43:03 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:42.590 10:43:03 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:42.590 10:43:03 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:42.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:42.590 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:42.590 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:42.590 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:42.590 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72148 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72148 00:14:42.590 10:43:03 ftl -- common/autotest_common.sh@835 -- # '[' -z 72148 ']' 00:14:42.590 10:43:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:42.590 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.590 10:43:03 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.590 10:43:03 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.590 10:43:03 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.590 10:43:03 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.590 10:43:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:42.590 [2024-11-18 10:43:03.819639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:42.590 [2024-11-18 10:43:03.820008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72148 ] 00:14:42.590 [2024-11-18 10:43:03.980243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.590 [2024-11-18 10:43:04.100261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.590 10:43:04 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.590 10:43:04 ftl -- common/autotest_common.sh@868 -- # return 0 00:14:42.590 10:43:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:42.590 10:43:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:42.590 10:43:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:42.590 10:43:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@50 -- # break 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@63 -- # break 00:14:42.590 10:43:06 ftl -- ftl/ftl.sh@66 -- # killprocess 72148 00:14:42.590 10:43:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 72148 ']' 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@958 -- # kill -0 72148 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@959 -- # uname 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.591 10:43:06 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72148 00:14:42.591 killing process with pid 72148 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72148' 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@973 -- # kill 72148 00:14:42.591 10:43:06 ftl -- common/autotest_common.sh@978 -- # wait 72148 00:14:42.591 10:43:07 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:42.591 10:43:07 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:42.591 10:43:07 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:42.591 10:43:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.591 10:43:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:42.591 ************************************ 00:14:42.591 START TEST ftl_fio_basic 00:14:42.591 ************************************ 00:14:42.591 10:43:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:42.591 * Looking for test storage... 00:14:42.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.591 --rc genhtml_branch_coverage=1 00:14:42.591 --rc genhtml_function_coverage=1 00:14:42.591 --rc genhtml_legend=1 00:14:42.591 --rc geninfo_all_blocks=1 00:14:42.591 --rc geninfo_unexecuted_blocks=1 00:14:42.591 00:14:42.591 ' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.591 --rc genhtml_branch_coverage=1 00:14:42.591 --rc genhtml_function_coverage=1 00:14:42.591 --rc genhtml_legend=1 00:14:42.591 --rc geninfo_all_blocks=1 00:14:42.591 --rc geninfo_unexecuted_blocks=1 00:14:42.591 00:14:42.591 ' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.591 --rc genhtml_branch_coverage=1 00:14:42.591 --rc genhtml_function_coverage=1 00:14:42.591 --rc genhtml_legend=1 00:14:42.591 --rc geninfo_all_blocks=1 00:14:42.591 --rc geninfo_unexecuted_blocks=1 00:14:42.591 00:14:42.591 ' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:42.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.591 --rc genhtml_branch_coverage=1 00:14:42.591 --rc genhtml_function_coverage=1 00:14:42.591 --rc genhtml_legend=1 00:14:42.591 --rc geninfo_all_blocks=1 00:14:42.591 --rc geninfo_unexecuted_blocks=1 00:14:42.591 00:14:42.591 ' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
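[editor's note] Both `START TEST ftl` and `START TEST ftl_fio_basic` above come from the `run_test` wrapper in common/autotest_common.sh, whose argument check (`'[' 2 -le 1 ']'`, `'[' 5 -le 1 ']'`) and banners are visible in the trace. A rough sketch of that shape; the banner text matches this log, but the END/timing details are assumptions since they fall outside this excerpt:

# Rough sketch of the run_test wrapper; END/timing lines are assumed.
run_test() {
    [ $# -le 1 ] && { echo "usage: run_test <name> <cmd...>" >&2; return 1; }
    local test_name=$1 start=$SECONDS rc=0
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@" || rc=$?          # run the test script with its arguments
    echo "END TEST $test_name ($((SECONDS - start))s, rc=$rc)"
    return $rc
}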
00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:42.591 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72288 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72288 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72288 ']' 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.592 10:43:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:42.592 [2024-11-18 10:43:08.193548] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
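[editor's note] For the fio suite, spdk_tgt is launched with `-m 7` (cores 0-2, matching the three reactor threads in the records below) and the harness blocks in `waitforlisten 72288` until the RPC socket answers. The locals in the trace (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`) suggest a polling loop along these lines; the `rpc_get_methods` probe and the poll interval are assumptions, not taken from this log:

# Plausible shape of waitforlisten, reconstructed from the locals above;
# probe command and sleep interval are assumed, not from this log.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died: give up
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}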
00:14:42.592 [2024-11-18 10:43:08.193776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72288 ] 00:14:42.592 [2024-11-18 10:43:08.347953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.592 [2024-11-18 10:43:08.446736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.592 [2024-11-18 10:43:08.447077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.592 [2024-11-18 10:43:08.447009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:43.163 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:43.429 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:43.689 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:43.689 { 00:14:43.689 "name": "nvme0n1", 00:14:43.689 "aliases": [ 00:14:43.689 "f0c19d9d-e33c-49d5-a211-7fd476e9a974" 00:14:43.689 ], 00:14:43.689 "product_name": "NVMe disk", 00:14:43.689 "block_size": 4096, 00:14:43.689 "num_blocks": 1310720, 00:14:43.689 "uuid": "f0c19d9d-e33c-49d5-a211-7fd476e9a974", 00:14:43.689 "numa_id": -1, 00:14:43.689 "assigned_rate_limits": { 00:14:43.689 "rw_ios_per_sec": 0, 00:14:43.689 "rw_mbytes_per_sec": 0, 00:14:43.689 "r_mbytes_per_sec": 0, 00:14:43.689 "w_mbytes_per_sec": 0 00:14:43.689 }, 00:14:43.689 "claimed": false, 00:14:43.689 "zoned": false, 00:14:43.689 "supported_io_types": { 00:14:43.689 "read": true, 00:14:43.689 "write": true, 00:14:43.689 "unmap": true, 00:14:43.689 "flush": true, 00:14:43.689 "reset": true, 00:14:43.689 "nvme_admin": true, 00:14:43.689 "nvme_io": true, 00:14:43.689 "nvme_io_md": false, 00:14:43.689 "write_zeroes": true, 00:14:43.689 "zcopy": false, 00:14:43.689 "get_zone_info": false, 00:14:43.689 "zone_management": false, 00:14:43.689 "zone_append": false, 00:14:43.689 "compare": true, 00:14:43.689 "compare_and_write": false, 00:14:43.689 "abort": true, 00:14:43.689 
"seek_hole": false, 00:14:43.689 "seek_data": false, 00:14:43.689 "copy": true, 00:14:43.690 "nvme_iov_md": false 00:14:43.690 }, 00:14:43.690 "driver_specific": { 00:14:43.690 "nvme": [ 00:14:43.690 { 00:14:43.690 "pci_address": "0000:00:11.0", 00:14:43.690 "trid": { 00:14:43.690 "trtype": "PCIe", 00:14:43.690 "traddr": "0000:00:11.0" 00:14:43.690 }, 00:14:43.690 "ctrlr_data": { 00:14:43.690 "cntlid": 0, 00:14:43.690 "vendor_id": "0x1b36", 00:14:43.690 "model_number": "QEMU NVMe Ctrl", 00:14:43.690 "serial_number": "12341", 00:14:43.690 "firmware_revision": "8.0.0", 00:14:43.690 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:43.690 "oacs": { 00:14:43.690 "security": 0, 00:14:43.690 "format": 1, 00:14:43.690 "firmware": 0, 00:14:43.690 "ns_manage": 1 00:14:43.690 }, 00:14:43.690 "multi_ctrlr": false, 00:14:43.690 "ana_reporting": false 00:14:43.690 }, 00:14:43.690 "vs": { 00:14:43.690 "nvme_version": "1.4" 00:14:43.690 }, 00:14:43.690 "ns_data": { 00:14:43.690 "id": 1, 00:14:43.690 "can_share": false 00:14:43.690 } 00:14:43.690 } 00:14:43.690 ], 00:14:43.690 "mp_policy": "active_passive" 00:14:43.690 } 00:14:43.690 } 00:14:43.690 ]' 00:14:43.690 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:43.690 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:43.690 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:43.947 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:43.948 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:44.205 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=b2377955-7aa2-4528-a4e6-7e355c5b1bd2 00:14:44.205 10:43:09 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2377955-7aa2-4528-a4e6-7e355c5b1bd2 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=63a47b48-fbab-4d52-8f84-eb6b6a2016a2 
00:14:44.465 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:44.465 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:44.725 { 00:14:44.725 "name": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:44.725 "aliases": [ 00:14:44.725 "lvs/nvme0n1p0" 00:14:44.725 ], 00:14:44.725 "product_name": "Logical Volume", 00:14:44.725 "block_size": 4096, 00:14:44.725 "num_blocks": 26476544, 00:14:44.725 "uuid": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:44.725 "assigned_rate_limits": { 00:14:44.725 "rw_ios_per_sec": 0, 00:14:44.725 "rw_mbytes_per_sec": 0, 00:14:44.725 "r_mbytes_per_sec": 0, 00:14:44.725 "w_mbytes_per_sec": 0 00:14:44.725 }, 00:14:44.725 "claimed": false, 00:14:44.725 "zoned": false, 00:14:44.725 "supported_io_types": { 00:14:44.725 "read": true, 00:14:44.725 "write": true, 00:14:44.725 "unmap": true, 00:14:44.725 "flush": false, 00:14:44.725 "reset": true, 00:14:44.725 "nvme_admin": false, 00:14:44.725 "nvme_io": false, 00:14:44.725 "nvme_io_md": false, 00:14:44.725 "write_zeroes": true, 00:14:44.725 "zcopy": false, 00:14:44.725 "get_zone_info": false, 00:14:44.725 "zone_management": false, 00:14:44.725 "zone_append": false, 00:14:44.725 "compare": false, 00:14:44.725 "compare_and_write": false, 00:14:44.725 "abort": false, 00:14:44.725 "seek_hole": true, 00:14:44.725 "seek_data": true, 00:14:44.725 "copy": false, 00:14:44.725 "nvme_iov_md": false 00:14:44.725 }, 00:14:44.725 "driver_specific": { 00:14:44.725 "lvol": { 00:14:44.725 "lvol_store_uuid": "b2377955-7aa2-4528-a4e6-7e355c5b1bd2", 00:14:44.725 "base_bdev": "nvme0n1", 00:14:44.725 "thin_provision": true, 00:14:44.725 "num_allocated_clusters": 0, 00:14:44.725 "snapshot": false, 00:14:44.725 "clone": false, 00:14:44.725 "esnap_clone": false 00:14:44.725 } 00:14:44.725 } 00:14:44.725 } 00:14:44.725 ]' 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:44.725 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:44.986 10:43:10 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:44.986 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:45.248 { 00:14:45.248 "name": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:45.248 "aliases": [ 00:14:45.248 "lvs/nvme0n1p0" 00:14:45.248 ], 00:14:45.248 "product_name": "Logical Volume", 00:14:45.248 "block_size": 4096, 00:14:45.248 "num_blocks": 26476544, 00:14:45.248 "uuid": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:45.248 "assigned_rate_limits": { 00:14:45.248 "rw_ios_per_sec": 0, 00:14:45.248 "rw_mbytes_per_sec": 0, 00:14:45.248 "r_mbytes_per_sec": 0, 00:14:45.248 "w_mbytes_per_sec": 0 00:14:45.248 }, 00:14:45.248 "claimed": false, 00:14:45.248 "zoned": false, 00:14:45.248 "supported_io_types": { 00:14:45.248 "read": true, 00:14:45.248 "write": true, 00:14:45.248 "unmap": true, 00:14:45.248 "flush": false, 00:14:45.248 "reset": true, 00:14:45.248 "nvme_admin": false, 00:14:45.248 "nvme_io": false, 00:14:45.248 "nvme_io_md": false, 00:14:45.248 "write_zeroes": true, 00:14:45.248 "zcopy": false, 00:14:45.248 "get_zone_info": false, 00:14:45.248 "zone_management": false, 00:14:45.248 "zone_append": false, 00:14:45.248 "compare": false, 00:14:45.248 "compare_and_write": false, 00:14:45.248 "abort": false, 00:14:45.248 "seek_hole": true, 00:14:45.248 "seek_data": true, 00:14:45.248 "copy": false, 00:14:45.248 "nvme_iov_md": false 00:14:45.248 }, 00:14:45.248 "driver_specific": { 00:14:45.248 "lvol": { 00:14:45.248 "lvol_store_uuid": "b2377955-7aa2-4528-a4e6-7e355c5b1bd2", 00:14:45.248 "base_bdev": "nvme0n1", 00:14:45.248 "thin_provision": true, 00:14:45.248 "num_allocated_clusters": 0, 00:14:45.248 "snapshot": false, 00:14:45.248 "clone": false, 00:14:45.248 "esnap_clone": false 00:14:45.248 } 00:14:45.248 } 00:14:45.248 } 00:14:45.248 ]' 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:45.248 10:43:10 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:45.509 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 00:14:45.509 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:45.509 { 00:14:45.509 "name": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:45.509 "aliases": [ 00:14:45.509 "lvs/nvme0n1p0" 00:14:45.509 ], 00:14:45.509 "product_name": "Logical Volume", 00:14:45.509 "block_size": 4096, 00:14:45.509 "num_blocks": 26476544, 00:14:45.509 "uuid": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:45.509 "assigned_rate_limits": { 00:14:45.509 "rw_ios_per_sec": 0, 00:14:45.509 "rw_mbytes_per_sec": 0, 00:14:45.509 "r_mbytes_per_sec": 0, 00:14:45.509 "w_mbytes_per_sec": 0 00:14:45.509 }, 00:14:45.510 "claimed": false, 00:14:45.510 "zoned": false, 00:14:45.510 "supported_io_types": { 00:14:45.510 "read": true, 00:14:45.510 "write": true, 00:14:45.510 "unmap": true, 00:14:45.510 "flush": false, 00:14:45.510 "reset": true, 00:14:45.510 "nvme_admin": false, 00:14:45.510 "nvme_io": false, 00:14:45.510 "nvme_io_md": false, 00:14:45.510 "write_zeroes": true, 00:14:45.510 "zcopy": false, 00:14:45.510 "get_zone_info": false, 00:14:45.510 "zone_management": false, 00:14:45.510 "zone_append": false, 00:14:45.510 "compare": false, 00:14:45.510 "compare_and_write": false, 00:14:45.510 "abort": false, 00:14:45.510 "seek_hole": true, 00:14:45.510 "seek_data": true, 00:14:45.510 "copy": false, 00:14:45.510 "nvme_iov_md": false 00:14:45.510 }, 00:14:45.510 "driver_specific": { 00:14:45.510 "lvol": { 00:14:45.510 "lvol_store_uuid": "b2377955-7aa2-4528-a4e6-7e355c5b1bd2", 00:14:45.510 "base_bdev": "nvme0n1", 00:14:45.510 "thin_provision": true, 00:14:45.510 "num_allocated_clusters": 0, 00:14:45.510 "snapshot": false, 00:14:45.510 "clone": false, 00:14:45.510 "esnap_clone": false 00:14:45.510 } 00:14:45.510 } 00:14:45.510 } 00:14:45.510 ]' 00:14:45.510 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:45.510 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:14:45.510 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:45.772 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:14:45.772 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:14:45.772 10:43:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:14:45.772 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:45.772 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:45.772 10:43:11 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 63a47b48-fbab-4d52-8f84-eb6b6a2016a2 -c nvc0n1p0 --l2p_dram_limit 60 00:14:45.772 [2024-11-18 10:43:11.587969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.772 [2024-11-18 10:43:11.588584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:45.772 [2024-11-18 10:43:11.588672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:45.772 
[2024-11-18 10:43:11.588719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.772 [2024-11-18 10:43:11.588832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.772 [2024-11-18 10:43:11.588883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:45.772 [2024-11-18 10:43:11.589020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:45.772 [2024-11-18 10:43:11.589087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.772 [2024-11-18 10:43:11.589170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:45.772 [2024-11-18 10:43:11.589982] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:45.772 [2024-11-18 10:43:11.590191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.772 [2024-11-18 10:43:11.590323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:45.772 [2024-11-18 10:43:11.590702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:14:45.772 [2024-11-18 10:43:11.590846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.772 [2024-11-18 10:43:11.591287] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 41c69fdb-3b6d-46b1-821b-ea2f7de7a928 00:14:45.772 [2024-11-18 10:43:11.592910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.772 [2024-11-18 10:43:11.593083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:45.772 [2024-11-18 10:43:11.593153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:45.773 [2024-11-18 10:43:11.593168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.600463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 10:43:11.600611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:45.773 [2024-11-18 10:43:11.600719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.193 ms 00:14:45.773 [2024-11-18 10:43:11.600785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.600998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 10:43:11.601098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:45.773 [2024-11-18 10:43:11.601304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:14:45.773 [2024-11-18 10:43:11.601419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.601587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 10:43:11.601693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:45.773 [2024-11-18 10:43:11.601781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:14:45.773 [2024-11-18 10:43:11.601842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.601970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:45.773 [2024-11-18 10:43:11.606061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 
10:43:11.606225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:45.773 [2024-11-18 10:43:11.606339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:14:45.773 [2024-11-18 10:43:11.606445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.606534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 10:43:11.606593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:45.773 [2024-11-18 10:43:11.606690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:14:45.773 [2024-11-18 10:43:11.606780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.606919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:45.773 [2024-11-18 10:43:11.607197] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:45.773 [2024-11-18 10:43:11.607276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:45.773 [2024-11-18 10:43:11.607326] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:45.773 [2024-11-18 10:43:11.607372] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:45.773 [2024-11-18 10:43:11.607412] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:45.773 [2024-11-18 10:43:11.607520] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:45.773 [2024-11-18 10:43:11.607574] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:45.773 [2024-11-18 10:43:11.607617] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:45.773 [2024-11-18 10:43:11.607661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:45.773 [2024-11-18 10:43:11.607704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 10:43:11.607781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:45.773 [2024-11-18 10:43:11.607834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:14:45.773 [2024-11-18 10:43:11.607873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.608002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.773 [2024-11-18 10:43:11.608098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:45.773 [2024-11-18 10:43:11.608182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:14:45.773 [2024-11-18 10:43:11.608262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.773 [2024-11-18 10:43:11.608483] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:45.773 [2024-11-18 10:43:11.608546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:45.773 [2024-11-18 10:43:11.608597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:45.773 [2024-11-18 10:43:11.608698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.608759] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:14:45.773 [2024-11-18 10:43:11.608800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.608840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:45.773 [2024-11-18 10:43:11.608877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:45.773 [2024-11-18 10:43:11.608968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:45.773 [2024-11-18 10:43:11.609022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:45.773 [2024-11-18 10:43:11.609065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:45.773 [2024-11-18 10:43:11.609105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:45.773 [2024-11-18 10:43:11.609147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:45.773 [2024-11-18 10:43:11.609236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:45.773 [2024-11-18 10:43:11.609293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:45.773 [2024-11-18 10:43:11.609338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.609380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:45.773 [2024-11-18 10:43:11.609465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:45.773 [2024-11-18 10:43:11.609518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.609554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:45.773 [2024-11-18 10:43:11.609596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:45.773 [2024-11-18 10:43:11.609677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:45.773 [2024-11-18 10:43:11.609728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:45.773 [2024-11-18 10:43:11.609767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:45.773 [2024-11-18 10:43:11.609807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:45.773 [2024-11-18 10:43:11.609877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:45.773 [2024-11-18 10:43:11.609928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:45.773 [2024-11-18 10:43:11.609969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:45.773 [2024-11-18 10:43:11.610004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:45.773 [2024-11-18 10:43:11.610083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:45.773 [2024-11-18 10:43:11.610138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:45.773 [2024-11-18 10:43:11.610173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:45.773 [2024-11-18 10:43:11.610230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:45.773 [2024-11-18 10:43:11.610316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:45.773 [2024-11-18 10:43:11.610368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:45.773 [2024-11-18 10:43:11.610469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:45.773 [2024-11-18 10:43:11.610520] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:45.773 [2024-11-18 10:43:11.610556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:45.773 [2024-11-18 10:43:11.610599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:45.773 [2024-11-18 10:43:11.610686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.610737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:45.773 [2024-11-18 10:43:11.610777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:45.773 [2024-11-18 10:43:11.610820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.610901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:45.773 [2024-11-18 10:43:11.610952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:45.773 [2024-11-18 10:43:11.610992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:45.773 [2024-11-18 10:43:11.611033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:45.773 [2024-11-18 10:43:11.611068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:45.773 [2024-11-18 10:43:11.611157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:45.773 [2024-11-18 10:43:11.611223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:45.773 [2024-11-18 10:43:11.611257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:45.773 [2024-11-18 10:43:11.611264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:45.773 [2024-11-18 10:43:11.611272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:45.773 [2024-11-18 10:43:11.611285] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:45.773 [2024-11-18 10:43:11.611298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:45.773 [2024-11-18 10:43:11.611307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:45.773 [2024-11-18 10:43:11.611317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:45.773 [2024-11-18 10:43:11.611324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:45.773 [2024-11-18 10:43:11.611333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:45.773 [2024-11-18 10:43:11.611341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:45.773 [2024-11-18 10:43:11.611350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:45.773 [2024-11-18 10:43:11.611357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:45.774 [2024-11-18 10:43:11.611366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:14:45.774 [2024-11-18 10:43:11.611373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:45.774 [2024-11-18 10:43:11.611384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:14:45.774 [2024-11-18 10:43:11.611391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:45.774 [2024-11-18 10:43:11.611402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:45.774 [2024-11-18 10:43:11.611409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:45.774 [2024-11-18 10:43:11.611418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:45.774 [2024-11-18 10:43:11.611425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:45.774 [2024-11-18 10:43:11.611435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:45.774 [2024-11-18 10:43:11.611446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:45.774 [2024-11-18 10:43:11.611455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:45.774 [2024-11-18 10:43:11.611463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:45.774 [2024-11-18 10:43:11.611471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:45.774 [2024-11-18 10:43:11.611481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.774 [2024-11-18 10:43:11.611490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:45.774 [2024-11-18 10:43:11.611498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:14:45.774 [2024-11-18 10:43:11.611507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.774 [2024-11-18 10:43:11.611608] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
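[editor's note] Two remarks on this stretch of the run. First, the earlier `fio.sh: line 52: [: -eq: unary operator expected` comes from a test whose left-hand operand expanded to nothing (the trace shows `'[' -eq 1 ']'`); quoting or defaulting the variable, e.g. `[ "${flag:-0}" -eq 1 ]`, would avoid it, though the variable's name is not visible in this trace. Second, the FTL startup being logged here is the result of the RPC sequence earlier in the test; condensed from the commands in this log (PCI addresses, sizes and the 240 s timeout are this run's values, and the UUIDs come back from the create calls at runtime):

# The RPC sequence behind the FTL startup traced above, condensed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin-provisioned, 103424 MiB
$rpc bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0, 5171 MiB
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60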
00:14:45.774 [2024-11-18 10:43:11.611623] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:48.322 [2024-11-18 10:43:13.808830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.808902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:48.322 [2024-11-18 10:43:13.808922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2197.210 ms 00:14:48.322 [2024-11-18 10:43:13.808935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.837589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.837643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:48.322 [2024-11-18 10:43:13.837658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.431 ms 00:14:48.322 [2024-11-18 10:43:13.837670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.837812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.837825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:48.322 [2024-11-18 10:43:13.837834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:14:48.322 [2024-11-18 10:43:13.837846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.886377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.886757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:48.322 [2024-11-18 10:43:13.886817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.483 ms 00:14:48.322 [2024-11-18 10:43:13.886844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.886945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.886975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:48.322 [2024-11-18 10:43:13.886998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:48.322 [2024-11-18 10:43:13.887022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.887771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.887845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:48.322 [2024-11-18 10:43:13.887871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:14:48.322 [2024-11-18 10:43:13.887903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.888237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.888279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:48.322 [2024-11-18 10:43:13.888301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:14:48.322 [2024-11-18 10:43:13.888329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.907191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.907236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:48.322 [2024-11-18 
10:43:13.907247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.785 ms 00:14:48.322 [2024-11-18 10:43:13.907258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.919627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:48.322 [2024-11-18 10:43:13.936892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.936937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:48.322 [2024-11-18 10:43:13.936951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.536 ms 00:14:48.322 [2024-11-18 10:43:13.936963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.992873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.993050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:48.322 [2024-11-18 10:43:13.993076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.870 ms 00:14:48.322 [2024-11-18 10:43:13.993085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:13.993345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.322 [2024-11-18 10:43:13.993359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:48.322 [2024-11-18 10:43:13.993372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:14:48.322 [2024-11-18 10:43:13.993380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.322 [2024-11-18 10:43:14.016543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.016581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:48.323 [2024-11-18 10:43:14.016594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.082 ms 00:14:48.323 [2024-11-18 10:43:14.016603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.039540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.039676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:48.323 [2024-11-18 10:43:14.039696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.892 ms 00:14:48.323 [2024-11-18 10:43:14.039703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.040311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.040329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:48.323 [2024-11-18 10:43:14.040340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:14:48.323 [2024-11-18 10:43:14.040348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.110283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.110418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:48.323 [2024-11-18 10:43:14.110442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.880 ms 00:14:48.323 [2024-11-18 10:43:14.110454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 
10:43:14.135376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.135412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:48.323 [2024-11-18 10:43:14.135427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.844 ms 00:14:48.323 [2024-11-18 10:43:14.135435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.158154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.158302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:48.323 [2024-11-18 10:43:14.158322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.684 ms 00:14:48.323 [2024-11-18 10:43:14.158330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.183155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.183188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:48.323 [2024-11-18 10:43:14.183201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.796 ms 00:14:48.323 [2024-11-18 10:43:14.183220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.183256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.183266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:48.323 [2024-11-18 10:43:14.183278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:48.323 [2024-11-18 10:43:14.183289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.183388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.323 [2024-11-18 10:43:14.183405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:48.323 [2024-11-18 10:43:14.183418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:14:48.323 [2024-11-18 10:43:14.183426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.323 [2024-11-18 10:43:14.184534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2596.044 ms, result 0 00:14:48.323 { 00:14:48.323 "name": "ftl0", 00:14:48.323 "uuid": "41c69fdb-3b6d-46b1-821b-ea2f7de7a928" 00:14:48.323 } 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.585 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:48.846 [ 00:14:48.846 { 00:14:48.846 "name": "ftl0", 00:14:48.846 "aliases": [ 00:14:48.846 "41c69fdb-3b6d-46b1-821b-ea2f7de7a928" 00:14:48.846 ], 00:14:48.846 "product_name": "FTL 
disk", 00:14:48.846 "block_size": 4096, 00:14:48.846 "num_blocks": 20971520, 00:14:48.846 "uuid": "41c69fdb-3b6d-46b1-821b-ea2f7de7a928", 00:14:48.846 "assigned_rate_limits": { 00:14:48.846 "rw_ios_per_sec": 0, 00:14:48.846 "rw_mbytes_per_sec": 0, 00:14:48.846 "r_mbytes_per_sec": 0, 00:14:48.846 "w_mbytes_per_sec": 0 00:14:48.846 }, 00:14:48.846 "claimed": false, 00:14:48.846 "zoned": false, 00:14:48.846 "supported_io_types": { 00:14:48.846 "read": true, 00:14:48.846 "write": true, 00:14:48.846 "unmap": true, 00:14:48.846 "flush": true, 00:14:48.846 "reset": false, 00:14:48.846 "nvme_admin": false, 00:14:48.846 "nvme_io": false, 00:14:48.846 "nvme_io_md": false, 00:14:48.846 "write_zeroes": true, 00:14:48.846 "zcopy": false, 00:14:48.846 "get_zone_info": false, 00:14:48.846 "zone_management": false, 00:14:48.846 "zone_append": false, 00:14:48.846 "compare": false, 00:14:48.846 "compare_and_write": false, 00:14:48.846 "abort": false, 00:14:48.846 "seek_hole": false, 00:14:48.846 "seek_data": false, 00:14:48.846 "copy": false, 00:14:48.846 "nvme_iov_md": false 00:14:48.846 }, 00:14:48.846 "driver_specific": { 00:14:48.846 "ftl": { 00:14:48.846 "base_bdev": "63a47b48-fbab-4d52-8f84-eb6b6a2016a2", 00:14:48.846 "cache": "nvc0n1p0" 00:14:48.846 } 00:14:48.846 } 00:14:48.846 } 00:14:48.846 ] 00:14:48.846 10:43:14 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:14:48.846 10:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:48.846 10:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:49.105 10:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:49.105 10:43:14 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:49.366 [2024-11-18 10:43:15.005256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.005419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:49.366 [2024-11-18 10:43:15.005437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:49.366 [2024-11-18 10:43:15.005446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.005477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:49.366 [2024-11-18 10:43:15.007695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.007723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:49.366 [2024-11-18 10:43:15.007733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:14:49.366 [2024-11-18 10:43:15.007741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.008135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.008150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:49.366 [2024-11-18 10:43:15.008159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:14:49.366 [2024-11-18 10:43:15.008165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.010636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.010655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:49.366 
[2024-11-18 10:43:15.010664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:14:49.366 [2024-11-18 10:43:15.010670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.015491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.015514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:49.366 [2024-11-18 10:43:15.015524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.797 ms 00:14:49.366 [2024-11-18 10:43:15.015530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.034760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.034788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:49.366 [2024-11-18 10:43:15.034801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.160 ms 00:14:49.366 [2024-11-18 10:43:15.034807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.047554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.047582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:49.366 [2024-11-18 10:43:15.047594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.693 ms 00:14:49.366 [2024-11-18 10:43:15.047603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.047747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.047755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:49.366 [2024-11-18 10:43:15.047765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:14:49.366 [2024-11-18 10:43:15.047772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.065959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.065985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:49.366 [2024-11-18 10:43:15.065995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.169 ms 00:14:49.366 [2024-11-18 10:43:15.066001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.083367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.083505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:49.366 [2024-11-18 10:43:15.083521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.326 ms 00:14:49.366 [2024-11-18 10:43:15.083527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.100884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.100910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:49.366 [2024-11-18 10:43:15.100920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.320 ms 00:14:49.366 [2024-11-18 10:43:15.100926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.366 [2024-11-18 10:43:15.118105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.366 [2024-11-18 10:43:15.118130] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:49.367 [2024-11-18 10:43:15.118140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.088 ms 00:14:49.367 [2024-11-18 10:43:15.118146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.367 [2024-11-18 10:43:15.118181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:49.367 [2024-11-18 10:43:15.118193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 
[2024-11-18 10:43:15.118378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:14:49.367 [2024-11-18 10:43:15.118550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:49.367 [2024-11-18 10:43:15.118745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:49.368 [2024-11-18 10:43:15.118920] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:49.368 [2024-11-18 10:43:15.118927] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 41c69fdb-3b6d-46b1-821b-ea2f7de7a928 00:14:49.368 [2024-11-18 10:43:15.118933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:49.368 [2024-11-18 10:43:15.118942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:49.368 [2024-11-18 10:43:15.118948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:49.368 [2024-11-18 10:43:15.118958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:49.368 [2024-11-18 10:43:15.118963] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:49.368 [2024-11-18 10:43:15.118971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:49.368 [2024-11-18 10:43:15.118977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:49.368 [2024-11-18 10:43:15.118983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:49.368 [2024-11-18 10:43:15.118988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:49.368 [2024-11-18 10:43:15.118995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.368 [2024-11-18 10:43:15.119001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:49.368 [2024-11-18 10:43:15.119010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:14:49.368 [2024-11-18 10:43:15.119016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.368 [2024-11-18 10:43:15.129020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.368 [2024-11-18 10:43:15.129047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:49.368 [2024-11-18 10:43:15.129057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.968 ms 00:14:49.368 [2024-11-18 10:43:15.129064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.368 [2024-11-18 10:43:15.129396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:49.368 [2024-11-18 10:43:15.129404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:49.368 [2024-11-18 10:43:15.129412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:14:49.368 [2024-11-18 10:43:15.129418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.368 [2024-11-18 10:43:15.165948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.368 [2024-11-18 10:43:15.166088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:49.368 [2024-11-18 10:43:15.166105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.368 [2024-11-18 10:43:15.166112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:14:49.368 [2024-11-18 10:43:15.166174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.368 [2024-11-18 10:43:15.166180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:49.368 [2024-11-18 10:43:15.166189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.368 [2024-11-18 10:43:15.166195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.368 [2024-11-18 10:43:15.166293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.368 [2024-11-18 10:43:15.166302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:49.368 [2024-11-18 10:43:15.166313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.368 [2024-11-18 10:43:15.166319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.368 [2024-11-18 10:43:15.166344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.368 [2024-11-18 10:43:15.166352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:49.368 [2024-11-18 10:43:15.166361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.368 [2024-11-18 10:43:15.166367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.368 [2024-11-18 10:43:15.233348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.368 [2024-11-18 10:43:15.233392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:49.368 [2024-11-18 10:43:15.233404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.368 [2024-11-18 10:43:15.233412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:49.628 [2024-11-18 10:43:15.285312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 10:43:15.285319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:49.628 [2024-11-18 10:43:15.285438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 10:43:15.285447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:49.628 [2024-11-18 10:43:15.285525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 10:43:15.285531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:49.628 [2024-11-18 10:43:15.285646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 
10:43:15.285652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:49.628 [2024-11-18 10:43:15.285719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 10:43:15.285725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:49.628 [2024-11-18 10:43:15.285784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 10:43:15.285790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.285845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:49.628 [2024-11-18 10:43:15.285853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:49.628 [2024-11-18 10:43:15.285860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:49.628 [2024-11-18 10:43:15.285866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:49.628 [2024-11-18 10:43:15.286008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.723 ms, result 0 00:14:49.628 true 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72288 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72288 ']' 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72288 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72288 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72288' 00:14:49.628 killing process with pid 72288 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72288 00:14:49.628 10:43:15 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72288 00:14:56.201 10:43:21 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:56.202 10:43:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:56.202 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:56.202 fio-3.35 00:14:56.202 Starting 1 thread 00:15:01.492 00:15:01.492 test: (groupid=0, jobs=1): err= 0: pid=72461: Mon Nov 18 10:43:26 2024 00:15:01.492 read: IOPS=1044, BW=69.4MiB/s (72.7MB/s)(255MiB/3670msec) 00:15:01.492 slat (nsec): min=2906, max=24325, avg=4312.27, stdev=2016.04 00:15:01.492 clat (usec): min=231, max=1368, avg=430.86, stdev=167.54 00:15:01.492 lat (usec): min=235, max=1378, avg=435.18, stdev=168.32 00:15:01.492 clat percentiles (usec): 00:15:01.492 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 314], 00:15:01.492 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 379], 60.00th=[ 404], 00:15:01.492 | 70.00th=[ 465], 80.00th=[ 519], 90.00th=[ 635], 95.00th=[ 848], 00:15:01.492 | 99.00th=[ 971], 99.50th=[ 1074], 99.90th=[ 1287], 99.95th=[ 1303], 00:15:01.492 | 99.99th=[ 1369] 00:15:01.492 write: IOPS=1051, BW=69.8MiB/s (73.2MB/s)(256MiB/3667msec); 0 zone resets 00:15:01.492 slat (nsec): min=13382, max=62393, avg=19138.34, stdev=4913.40 00:15:01.492 clat (usec): min=253, max=1806, avg=486.35, stdev=199.92 00:15:01.492 lat (usec): min=273, max=1841, avg=505.49, stdev=202.40 00:15:01.492 clat percentiles (usec): 00:15:01.492 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 338], 20.00th=[ 338], 00:15:01.492 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 420], 60.00th=[ 474], 00:15:01.492 | 70.00th=[ 537], 80.00th=[ 603], 90.00th=[ 799], 95.00th=[ 955], 00:15:01.492 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1352], 99.95th=[ 1696], 00:15:01.492 | 99.99th=[ 1811] 00:15:01.492 bw ( KiB/s): min=42568, max=85408, per=98.60%, avg=70506.29, stdev=15687.60, samples=7 00:15:01.492 iops : min= 626, max= 1256, avg=1036.86, stdev=230.70, samples=7 00:15:01.492 lat (usec) : 250=0.05%, 500=70.95%, 
750=19.73%, 1000=7.45% 00:15:01.492 lat (msec) : 2=1.82% 00:15:01.492 cpu : usr=99.21%, sys=0.08%, ctx=11, majf=0, minf=1169 00:15:01.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.492 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.492 00:15:01.492 Run status group 0 (all jobs): 00:15:01.492 READ: bw=69.4MiB/s (72.7MB/s), 69.4MiB/s-69.4MiB/s (72.7MB/s-72.7MB/s), io=255MiB (267MB), run=3670-3670msec 00:15:01.492 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=256MiB (269MB), run=3667-3667msec 00:15:02.154 ----------------------------------------------------- 00:15:02.154 Suppressions used: 00:15:02.154 count bytes template 00:15:02.154 1 5 /usr/src/fio/parse.c 00:15:02.154 1 8 libtcmalloc_minimal.so 00:15:02.154 1 904 libcrypto.so 00:15:02.154 ----------------------------------------------------- 00:15:02.154 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:02.414 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.415 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:02.415 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:02.415 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:02.415 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:02.415 10:43:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:02.415 10:43:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:02.674 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:02.674 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:02.674 fio-3.35 00:15:02.674 Starting 2 threads 00:15:24.621 00:15:24.621 first_half: (groupid=0, jobs=1): err= 0: pid=72558: Mon Nov 18 10:43:50 2024 00:15:24.621 read: IOPS=3117, BW=12.2MiB/s (12.8MB/s)(256MiB/21005msec) 00:15:24.621 slat (nsec): min=2940, max=54747, avg=3743.62, stdev=745.91 00:15:24.621 clat (usec): min=456, max=260553, avg=34589.96, stdev=21689.22 00:15:24.621 lat (usec): min=459, max=260556, avg=34593.70, stdev=21689.30 00:15:24.621 clat percentiles (msec): 00:15:24.621 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:15:24.621 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:15:24.621 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 66], 00:15:24.621 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 203], 99.95th=[ 234], 00:15:24.621 | 99.99th=[ 255] 00:15:24.621 write: IOPS=3123, BW=12.2MiB/s (12.8MB/s)(256MiB/20981msec); 0 zone resets 00:15:24.621 slat (usec): min=3, max=268, avg= 5.06, stdev= 2.39 00:15:24.621 clat (usec): min=355, max=42128, avg=6444.93, stdev=6611.38 00:15:24.621 lat (usec): min=361, max=42133, avg=6449.99, stdev=6611.56 00:15:24.621 clat percentiles (usec): 00:15:24.621 | 1.00th=[ 693], 5.00th=[ 857], 10.00th=[ 1237], 20.00th=[ 2671], 00:15:24.621 | 30.00th=[ 3294], 40.00th=[ 4015], 50.00th=[ 4555], 60.00th=[ 5014], 00:15:24.621 | 70.00th=[ 5604], 80.00th=[ 6783], 90.00th=[16057], 95.00th=[23725], 00:15:24.621 | 99.00th=[30540], 99.50th=[32900], 99.90th=[40633], 99.95th=[41157], 00:15:24.621 | 99.99th=[42206] 00:15:24.621 bw ( KiB/s): min= 248, max=55744, per=99.20%, avg=24788.48, stdev=15091.75, samples=21 00:15:24.621 iops : min= 62, max=13936, avg=6197.10, stdev=3772.93, samples=21 00:15:24.621 lat (usec) : 500=0.02%, 750=1.16%, 1000=2.16% 00:15:24.622 lat (msec) : 2=3.95%, 4=12.62%, 10=23.68%, 20=4.60%, 50=48.66% 00:15:24.622 lat (msec) : 100=1.49%, 250=1.66%, 500=0.01% 00:15:24.622 cpu : usr=99.45%, sys=0.08%, ctx=257, majf=0, minf=5550 00:15:24.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:24.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.622 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.622 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.622 second_half: (groupid=0, jobs=1): err= 0: pid=72559: Mon Nov 18 10:43:50 2024 00:15:24.622 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(256MiB/20859msec) 00:15:24.622 slat (usec): min=3, max=197, avg= 4.11, stdev= 1.37 00:15:24.622 clat (msec): min=9, max=175, avg=34.85, stdev=18.98 00:15:24.622 lat (msec): min=9, max=175, avg=34.86, stdev=18.98 00:15:24.622 clat percentiles (msec): 00:15:24.622 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 29], 00:15:24.622 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:15:24.622 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 64], 00:15:24.622 | 
99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 163], 00:15:24.622 | 99.99th=[ 167] 00:15:24.622 write: IOPS=3381, BW=13.2MiB/s (13.9MB/s)(256MiB/19380msec); 0 zone resets 00:15:24.622 slat (usec): min=3, max=2524, avg= 5.58, stdev=13.48 00:15:24.622 clat (usec): min=359, max=40128, avg=5898.36, stdev=4712.74 00:15:24.622 lat (usec): min=370, max=40134, avg=5903.94, stdev=4713.31 00:15:24.622 clat percentiles (usec): 00:15:24.622 | 1.00th=[ 938], 5.00th=[ 1696], 10.00th=[ 2311], 20.00th=[ 2900], 00:15:24.622 | 30.00th=[ 3490], 40.00th=[ 4178], 50.00th=[ 4686], 60.00th=[ 5211], 00:15:24.622 | 70.00th=[ 5669], 80.00th=[ 6718], 90.00th=[11338], 95.00th=[17171], 00:15:24.622 | 99.00th=[24511], 99.50th=[26346], 99.90th=[33817], 99.95th=[36439], 00:15:24.622 | 99.99th=[39060] 00:15:24.622 bw ( KiB/s): min= 2344, max=47576, per=99.91%, avg=24966.10, stdev=13214.33, samples=21 00:15:24.622 iops : min= 586, max=11894, avg=6241.52, stdev=3303.58, samples=21 00:15:24.622 lat (usec) : 500=0.04%, 750=0.15%, 1000=0.44% 00:15:24.622 lat (msec) : 2=2.74%, 4=15.25%, 10=25.01%, 20=5.08%, 50=48.16% 00:15:24.622 lat (msec) : 100=1.56%, 250=1.56% 00:15:24.622 cpu : usr=99.27%, sys=0.12%, ctx=40, majf=0, minf=5563 00:15:24.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:24.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.622 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.622 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.622 00:15:24.622 Run status group 0 (all jobs): 00:15:24.622 READ: bw=24.4MiB/s (25.5MB/s), 12.2MiB/s-12.3MiB/s (12.8MB/s-12.9MB/s), io=512MiB (536MB), run=20859-21005msec 00:15:24.622 WRITE: bw=24.4MiB/s (25.6MB/s), 12.2MiB/s-13.2MiB/s (12.8MB/s-13.9MB/s), io=512MiB (537MB), run=19380-20981msec 00:15:27.167 ----------------------------------------------------- 00:15:27.167 Suppressions used: 00:15:27.168 count bytes template 00:15:27.168 2 10 /usr/src/fio/parse.c 00:15:27.168 3 288 /usr/src/fio/iolog.c 00:15:27.168 1 8 libtcmalloc_minimal.so 00:15:27.168 1 904 libcrypto.so 00:15:27.168 ----------------------------------------------------- 00:15:27.168 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.168 10:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:27.429 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:27.429 fio-3.35 00:15:27.429 Starting 1 thread 00:15:45.536 00:15:45.536 test: (groupid=0, jobs=1): err= 0: pid=72851: Mon Nov 18 10:44:09 2024 00:15:45.536 read: IOPS=7209, BW=28.2MiB/s (29.5MB/s)(255MiB/9044msec) 00:15:45.536 slat (nsec): min=2960, max=26296, avg=4548.71, stdev=1069.43 00:15:45.536 clat (usec): min=1010, max=37147, avg=17747.16, stdev=3244.48 00:15:45.536 lat (usec): min=1018, max=37152, avg=17751.70, stdev=3244.53 00:15:45.536 clat percentiles (usec): 00:15:45.536 | 1.00th=[14091], 5.00th=[14353], 10.00th=[14484], 20.00th=[14615], 00:15:45.536 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16909], 60.00th=[18482], 00:15:45.536 | 70.00th=[19268], 80.00th=[20317], 90.00th=[22152], 95.00th=[23725], 00:15:45.536 | 99.00th=[27132], 99.50th=[28181], 99.90th=[30016], 99.95th=[33162], 00:15:45.536 | 99.99th=[36439] 00:15:45.536 write: IOPS=9727, BW=38.0MiB/s (39.8MB/s)(256MiB/6737msec); 0 zone resets 00:15:45.536 slat (usec): min=4, max=654, avg= 7.48, stdev= 6.78 00:15:45.536 clat (usec): min=478, max=81163, avg=13110.82, stdev=16439.26 00:15:45.536 lat (usec): min=483, max=81170, avg=13118.30, stdev=16439.72 00:15:45.536 clat percentiles (usec): 00:15:45.536 | 1.00th=[ 742], 5.00th=[ 938], 10.00th=[ 1123], 20.00th=[ 1696], 00:15:45.536 | 30.00th=[ 2180], 40.00th=[ 3130], 50.00th=[ 7373], 60.00th=[11338], 00:15:45.536 | 70.00th=[13960], 80.00th=[16450], 90.00th=[41681], 95.00th=[57410], 00:15:45.537 | 99.00th=[65799], 99.50th=[66847], 99.90th=[70779], 99.95th=[71828], 00:15:45.537 | 99.99th=[78119] 00:15:45.537 bw ( KiB/s): min=14280, max=64888, per=96.23%, avg=37445.21, stdev=13472.09, samples=14 00:15:45.537 iops : min= 3570, max=16222, avg=9361.29, stdev=3368.04, samples=14 00:15:45.537 lat (usec) : 500=0.01%, 750=0.58%, 1000=2.70% 00:15:45.537 lat (msec) : 2=10.07%, 4=7.70%, 10=6.93%, 20=52.13%, 50=16.22% 00:15:45.537 lat (msec) : 100=3.67% 00:15:45.537 cpu : usr=98.89%, sys=0.27%, ctx=30, majf=0, minf=5565 00:15:45.537 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.537 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.537 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.537 00:15:45.537 Run status group 0 (all jobs): 00:15:45.537 READ: bw=28.2MiB/s (29.5MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=255MiB (267MB), run=9044-9044msec 00:15:45.537 WRITE: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=256MiB (268MB), run=6737-6737msec 00:15:45.798 ----------------------------------------------------- 00:15:45.798 Suppressions used: 00:15:45.798 count bytes template 00:15:45.798 1 5 /usr/src/fio/parse.c 00:15:45.798 2 192 /usr/src/fio/iolog.c 00:15:45.798 1 8 libtcmalloc_minimal.so 00:15:45.798 1 904 libcrypto.so 00:15:45.798 ----------------------------------------------------- 00:15:45.798 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:45.798 Remove shared memory files 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57087 /dev/shm/spdk_tgt_trace.pid71197 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:45.798 ************************************ 00:15:45.798 END TEST ftl_fio_basic 00:15:45.798 ************************************ 00:15:45.798 00:15:45.798 real 1m3.700s 00:15:45.798 user 2m5.978s 00:15:45.798 sys 0m14.358s 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.798 10:44:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 10:44:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:46.060 10:44:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:46.060 10:44:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.060 10:44:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:46.060 ************************************ 00:15:46.060 START TEST ftl_bdevperf 00:15:46.060 ************************************ 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:46.061 * Looking for test storage... 
00:15:46.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:46.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.061 --rc genhtml_branch_coverage=1 00:15:46.061 --rc genhtml_function_coverage=1 00:15:46.061 --rc genhtml_legend=1 00:15:46.061 --rc geninfo_all_blocks=1 00:15:46.061 --rc geninfo_unexecuted_blocks=1 00:15:46.061 00:15:46.061 ' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:46.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.061 --rc genhtml_branch_coverage=1 00:15:46.061 
--rc genhtml_function_coverage=1 00:15:46.061 --rc genhtml_legend=1 00:15:46.061 --rc geninfo_all_blocks=1 00:15:46.061 --rc geninfo_unexecuted_blocks=1 00:15:46.061 00:15:46.061 ' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:46.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.061 --rc genhtml_branch_coverage=1 00:15:46.061 --rc genhtml_function_coverage=1 00:15:46.061 --rc genhtml_legend=1 00:15:46.061 --rc geninfo_all_blocks=1 00:15:46.061 --rc geninfo_unexecuted_blocks=1 00:15:46.061 00:15:46.061 ' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:46.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.061 --rc genhtml_branch_coverage=1 00:15:46.061 --rc genhtml_function_coverage=1 00:15:46.061 --rc genhtml_legend=1 00:15:46.061 --rc geninfo_all_blocks=1 00:15:46.061 --rc geninfo_unexecuted_blocks=1 00:15:46.061 00:15:46.061 ' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73115 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73115 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73115 ']' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.061 10:44:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:46.322 [2024-11-18 10:44:11.987244] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:46.322 [2024-11-18 10:44:11.987554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73115 ] 00:15:46.322 [2024-11-18 10:44:12.154790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.583 [2024-11-18 10:44:12.276116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:47.156 10:44:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:47.417 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:47.679 { 00:15:47.679 "name": "nvme0n1", 00:15:47.679 "aliases": [ 00:15:47.679 "475169c6-93a6-4ff7-9cce-e08b53bc0e3b" 00:15:47.679 ], 00:15:47.679 "product_name": "NVMe disk", 00:15:47.679 "block_size": 4096, 00:15:47.679 "num_blocks": 1310720, 00:15:47.679 "uuid": "475169c6-93a6-4ff7-9cce-e08b53bc0e3b", 00:15:47.679 "numa_id": -1, 00:15:47.679 "assigned_rate_limits": { 00:15:47.679 "rw_ios_per_sec": 0, 00:15:47.679 "rw_mbytes_per_sec": 0, 00:15:47.679 "r_mbytes_per_sec": 0, 00:15:47.679 "w_mbytes_per_sec": 0 00:15:47.679 }, 00:15:47.679 "claimed": true, 00:15:47.679 "claim_type": "read_many_write_one", 00:15:47.679 "zoned": false, 00:15:47.679 "supported_io_types": { 00:15:47.679 "read": true, 00:15:47.679 "write": true, 00:15:47.679 "unmap": true, 00:15:47.679 "flush": true, 00:15:47.679 "reset": true, 00:15:47.679 "nvme_admin": true, 00:15:47.679 "nvme_io": true, 00:15:47.679 "nvme_io_md": false, 00:15:47.679 "write_zeroes": true, 00:15:47.679 "zcopy": false, 00:15:47.679 "get_zone_info": false, 00:15:47.679 "zone_management": false, 00:15:47.679 "zone_append": false, 00:15:47.679 "compare": true, 00:15:47.679 "compare_and_write": false, 00:15:47.679 "abort": true, 00:15:47.679 "seek_hole": false, 00:15:47.679 "seek_data": false, 00:15:47.679 "copy": true, 00:15:47.679 "nvme_iov_md": false 00:15:47.679 }, 00:15:47.679 "driver_specific": { 00:15:47.679 
"nvme": [ 00:15:47.679 { 00:15:47.679 "pci_address": "0000:00:11.0", 00:15:47.679 "trid": { 00:15:47.679 "trtype": "PCIe", 00:15:47.679 "traddr": "0000:00:11.0" 00:15:47.679 }, 00:15:47.679 "ctrlr_data": { 00:15:47.679 "cntlid": 0, 00:15:47.679 "vendor_id": "0x1b36", 00:15:47.679 "model_number": "QEMU NVMe Ctrl", 00:15:47.679 "serial_number": "12341", 00:15:47.679 "firmware_revision": "8.0.0", 00:15:47.679 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:47.679 "oacs": { 00:15:47.679 "security": 0, 00:15:47.679 "format": 1, 00:15:47.679 "firmware": 0, 00:15:47.679 "ns_manage": 1 00:15:47.679 }, 00:15:47.679 "multi_ctrlr": false, 00:15:47.679 "ana_reporting": false 00:15:47.679 }, 00:15:47.679 "vs": { 00:15:47.679 "nvme_version": "1.4" 00:15:47.679 }, 00:15:47.679 "ns_data": { 00:15:47.679 "id": 1, 00:15:47.679 "can_share": false 00:15:47.679 } 00:15:47.679 } 00:15:47.679 ], 00:15:47.679 "mp_policy": "active_passive" 00:15:47.679 } 00:15:47.679 } 00:15:47.679 ]' 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:47.679 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:47.939 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=b2377955-7aa2-4528-a4e6-7e355c5b1bd2 00:15:47.939 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:47.939 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2377955-7aa2-4528-a4e6-7e355c5b1bd2 00:15:48.197 10:44:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:48.197 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f15d1bb0-32bd-4e5d-baf0-ebcec62c951e 00:15:48.197 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f15d1bb0-32bd-4e5d-baf0-ebcec62c951e 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.457 10:44:14 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:48.457 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:48.718 { 00:15:48.718 "name": "0c43236e-8978-4136-a1c9-514834434b4c", 00:15:48.718 "aliases": [ 00:15:48.718 "lvs/nvme0n1p0" 00:15:48.718 ], 00:15:48.718 "product_name": "Logical Volume", 00:15:48.718 "block_size": 4096, 00:15:48.718 "num_blocks": 26476544, 00:15:48.718 "uuid": "0c43236e-8978-4136-a1c9-514834434b4c", 00:15:48.718 "assigned_rate_limits": { 00:15:48.718 "rw_ios_per_sec": 0, 00:15:48.718 "rw_mbytes_per_sec": 0, 00:15:48.718 "r_mbytes_per_sec": 0, 00:15:48.718 "w_mbytes_per_sec": 0 00:15:48.718 }, 00:15:48.718 "claimed": false, 00:15:48.718 "zoned": false, 00:15:48.718 "supported_io_types": { 00:15:48.718 "read": true, 00:15:48.718 "write": true, 00:15:48.718 "unmap": true, 00:15:48.718 "flush": false, 00:15:48.718 "reset": true, 00:15:48.718 "nvme_admin": false, 00:15:48.718 "nvme_io": false, 00:15:48.718 "nvme_io_md": false, 00:15:48.718 "write_zeroes": true, 00:15:48.718 "zcopy": false, 00:15:48.718 "get_zone_info": false, 00:15:48.718 "zone_management": false, 00:15:48.718 "zone_append": false, 00:15:48.718 "compare": false, 00:15:48.718 "compare_and_write": false, 00:15:48.718 "abort": false, 00:15:48.718 "seek_hole": true, 00:15:48.718 "seek_data": true, 00:15:48.718 "copy": false, 00:15:48.718 "nvme_iov_md": false 00:15:48.718 }, 00:15:48.718 "driver_specific": { 00:15:48.718 "lvol": { 00:15:48.718 "lvol_store_uuid": "f15d1bb0-32bd-4e5d-baf0-ebcec62c951e", 00:15:48.718 "base_bdev": "nvme0n1", 00:15:48.718 "thin_provision": true, 00:15:48.718 "num_allocated_clusters": 0, 00:15:48.718 "snapshot": false, 00:15:48.718 "clone": false, 00:15:48.718 "esnap_clone": false 00:15:48.718 } 00:15:48.718 } 00:15:48.718 } 00:15:48.718 ]' 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:48.718 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:48.979 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=0c43236e-8978-4136-a1c9-514834434b4c 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:48.980 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c43236e-8978-4136-a1c9-514834434b4c 00:15:49.241 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:49.241 { 00:15:49.241 "name": "0c43236e-8978-4136-a1c9-514834434b4c", 00:15:49.241 "aliases": [ 00:15:49.241 "lvs/nvme0n1p0" 00:15:49.241 ], 00:15:49.241 "product_name": "Logical Volume", 00:15:49.241 "block_size": 4096, 00:15:49.241 "num_blocks": 26476544, 00:15:49.241 "uuid": "0c43236e-8978-4136-a1c9-514834434b4c", 00:15:49.241 "assigned_rate_limits": { 00:15:49.241 "rw_ios_per_sec": 0, 00:15:49.241 "rw_mbytes_per_sec": 0, 00:15:49.241 "r_mbytes_per_sec": 0, 00:15:49.241 "w_mbytes_per_sec": 0 00:15:49.241 }, 00:15:49.241 "claimed": false, 00:15:49.241 "zoned": false, 00:15:49.241 "supported_io_types": { 00:15:49.241 "read": true, 00:15:49.241 "write": true, 00:15:49.241 "unmap": true, 00:15:49.241 "flush": false, 00:15:49.241 "reset": true, 00:15:49.241 "nvme_admin": false, 00:15:49.241 "nvme_io": false, 00:15:49.241 "nvme_io_md": false, 00:15:49.241 "write_zeroes": true, 00:15:49.241 "zcopy": false, 00:15:49.241 "get_zone_info": false, 00:15:49.241 "zone_management": false, 00:15:49.241 "zone_append": false, 00:15:49.241 "compare": false, 00:15:49.241 "compare_and_write": false, 00:15:49.241 "abort": false, 00:15:49.241 "seek_hole": true, 00:15:49.241 "seek_data": true, 00:15:49.241 "copy": false, 00:15:49.241 "nvme_iov_md": false 00:15:49.241 }, 00:15:49.241 "driver_specific": { 00:15:49.241 "lvol": { 00:15:49.241 "lvol_store_uuid": "f15d1bb0-32bd-4e5d-baf0-ebcec62c951e", 00:15:49.241 "base_bdev": "nvme0n1", 00:15:49.241 "thin_provision": true, 00:15:49.241 "num_allocated_clusters": 0, 00:15:49.241 "snapshot": false, 00:15:49.241 "clone": false, 00:15:49.241 "esnap_clone": false 00:15:49.241 } 00:15:49.241 } 00:15:49.241 } 00:15:49.241 ]' 00:15:49.241 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:49.241 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:49.241 10:44:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:49.241 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:49.241 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:49.241 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:49.242 10:44:15 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:49.242 10:44:15 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:49.502 10:44:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:49.502 10:44:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0c43236e-8978-4136-a1c9-514834434b4c 00:15:49.503 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0c43236e-8978-4136-a1c9-514834434b4c 00:15:49.503 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:49.503 10:44:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:15:49.503 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:49.503 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c43236e-8978-4136-a1c9-514834434b4c 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:49.763 { 00:15:49.763 "name": "0c43236e-8978-4136-a1c9-514834434b4c", 00:15:49.763 "aliases": [ 00:15:49.763 "lvs/nvme0n1p0" 00:15:49.763 ], 00:15:49.763 "product_name": "Logical Volume", 00:15:49.763 "block_size": 4096, 00:15:49.763 "num_blocks": 26476544, 00:15:49.763 "uuid": "0c43236e-8978-4136-a1c9-514834434b4c", 00:15:49.763 "assigned_rate_limits": { 00:15:49.763 "rw_ios_per_sec": 0, 00:15:49.763 "rw_mbytes_per_sec": 0, 00:15:49.763 "r_mbytes_per_sec": 0, 00:15:49.763 "w_mbytes_per_sec": 0 00:15:49.763 }, 00:15:49.763 "claimed": false, 00:15:49.763 "zoned": false, 00:15:49.763 "supported_io_types": { 00:15:49.763 "read": true, 00:15:49.763 "write": true, 00:15:49.763 "unmap": true, 00:15:49.763 "flush": false, 00:15:49.763 "reset": true, 00:15:49.763 "nvme_admin": false, 00:15:49.763 "nvme_io": false, 00:15:49.763 "nvme_io_md": false, 00:15:49.763 "write_zeroes": true, 00:15:49.763 "zcopy": false, 00:15:49.763 "get_zone_info": false, 00:15:49.763 "zone_management": false, 00:15:49.763 "zone_append": false, 00:15:49.763 "compare": false, 00:15:49.763 "compare_and_write": false, 00:15:49.763 "abort": false, 00:15:49.763 "seek_hole": true, 00:15:49.763 "seek_data": true, 00:15:49.763 "copy": false, 00:15:49.763 "nvme_iov_md": false 00:15:49.763 }, 00:15:49.763 "driver_specific": { 00:15:49.763 "lvol": { 00:15:49.763 "lvol_store_uuid": "f15d1bb0-32bd-4e5d-baf0-ebcec62c951e", 00:15:49.763 "base_bdev": "nvme0n1", 00:15:49.763 "thin_provision": true, 00:15:49.763 "num_allocated_clusters": 0, 00:15:49.763 "snapshot": false, 00:15:49.763 "clone": false, 00:15:49.763 "esnap_clone": false 00:15:49.763 } 00:15:49.763 } 00:15:49.763 } 00:15:49.763 ]' 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:49.763 10:44:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0c43236e-8978-4136-a1c9-514834434b4c -c nvc0n1p0 --l2p_dram_limit 20 00:15:50.051 [2024-11-18 10:44:15.715942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.051 [2024-11-18 10:44:15.715983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:50.051 [2024-11-18 10:44:15.715994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:50.051 [2024-11-18 10:44:15.716003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.051 [2024-11-18 10:44:15.716047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.051 [2024-11-18 10:44:15.716058] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:50.051 [2024-11-18 10:44:15.716065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:50.051 [2024-11-18 10:44:15.716072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.051 [2024-11-18 10:44:15.716085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:50.051 [2024-11-18 10:44:15.716740] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:50.051 [2024-11-18 10:44:15.716758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.051 [2024-11-18 10:44:15.716766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:50.051 [2024-11-18 10:44:15.716773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:15:50.051 [2024-11-18 10:44:15.716780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.051 [2024-11-18 10:44:15.716803] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b45ea8d2-858a-4a3d-8018-262b7ad21dfe 00:15:50.051 [2024-11-18 10:44:15.717761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.051 [2024-11-18 10:44:15.717925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:50.051 [2024-11-18 10:44:15.717944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:50.051 [2024-11-18 10:44:15.717952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.051 [2024-11-18 10:44:15.722789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.051 [2024-11-18 10:44:15.722873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:50.051 [2024-11-18 10:44:15.722926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:15:50.051 [2024-11-18 10:44:15.722944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.051 [2024-11-18 10:44:15.723023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.051 [2024-11-18 10:44:15.723094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:50.051 [2024-11-18 10:44:15.723164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:50.052 [2024-11-18 10:44:15.723179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.052 [2024-11-18 10:44:15.723238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.052 [2024-11-18 10:44:15.723258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:50.052 [2024-11-18 10:44:15.723275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:50.052 [2024-11-18 10:44:15.723290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.052 [2024-11-18 10:44:15.723378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:50.052 [2024-11-18 10:44:15.726321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.052 [2024-11-18 10:44:15.726415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:50.052 [2024-11-18 10:44:15.726463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:15:50.052 [2024-11-18 10:44:15.726483] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.052 [2024-11-18 10:44:15.726520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.052 [2024-11-18 10:44:15.726537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:50.052 [2024-11-18 10:44:15.726584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:50.052 [2024-11-18 10:44:15.726602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.052 [2024-11-18 10:44:15.726630] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:50.052 [2024-11-18 10:44:15.726750] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:50.052 [2024-11-18 10:44:15.726784] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:50.052 [2024-11-18 10:44:15.726835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:50.052 [2024-11-18 10:44:15.726862] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:50.052 [2024-11-18 10:44:15.726887] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:50.052 [2024-11-18 10:44:15.726910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:50.052 [2024-11-18 10:44:15.726926] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:50.052 [2024-11-18 10:44:15.726941] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:50.052 [2024-11-18 10:44:15.726989] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:50.052 [2024-11-18 10:44:15.727007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.052 [2024-11-18 10:44:15.727027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:50.052 [2024-11-18 10:44:15.727043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:15:50.052 [2024-11-18 10:44:15.727058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.052 [2024-11-18 10:44:15.727131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.052 [2024-11-18 10:44:15.727178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:50.052 [2024-11-18 10:44:15.727193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:50.052 [2024-11-18 10:44:15.727225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.052 [2024-11-18 10:44:15.727317] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:50.052 [2024-11-18 10:44:15.727370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:50.052 [2024-11-18 10:44:15.727390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:50.052 [2024-11-18 10:44:15.727407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:50.052 [2024-11-18 10:44:15.727437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:50.052 
[2024-11-18 10:44:15.727497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:50.052 [2024-11-18 10:44:15.727512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:50.052 [2024-11-18 10:44:15.727541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:50.052 [2024-11-18 10:44:15.727586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:50.052 [2024-11-18 10:44:15.727603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:50.052 [2024-11-18 10:44:15.727624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:50.052 [2024-11-18 10:44:15.727638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:50.052 [2024-11-18 10:44:15.727656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:50.052 [2024-11-18 10:44:15.727713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:50.052 [2024-11-18 10:44:15.727727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:50.052 [2024-11-18 10:44:15.727839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:50.052 [2024-11-18 10:44:15.727868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:50.052 [2024-11-18 10:44:15.727883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:50.052 [2024-11-18 10:44:15.727989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:50.052 [2024-11-18 10:44:15.728016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:50.052 [2024-11-18 10:44:15.728091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:50.052 [2024-11-18 10:44:15.728110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:50.052 [2024-11-18 10:44:15.728124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:50.052 [2024-11-18 10:44:15.728140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:50.052 [2024-11-18 10:44:15.728155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:50.052 [2024-11-18 10:44:15.728240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:50.052 [2024-11-18 10:44:15.728258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:50.052 [2024-11-18 10:44:15.728274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:50.052 [2024-11-18 10:44:15.728288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:50.052 [2024-11-18 10:44:15.728361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:50.052 [2024-11-18 10:44:15.728378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:50.052 [2024-11-18 10:44:15.728402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:50.052 [2024-11-18 10:44:15.728417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:50.052 [2024-11-18 10:44:15.728457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.728474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:50.052 [2024-11-18 10:44:15.728489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:50.052 [2024-11-18 10:44:15.728503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.728538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:50.052 [2024-11-18 10:44:15.728557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:50.052 [2024-11-18 10:44:15.728573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:50.052 [2024-11-18 10:44:15.728642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.052 [2024-11-18 10:44:15.728663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:50.052 [2024-11-18 10:44:15.728678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:50.052 [2024-11-18 10:44:15.728693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:50.052 [2024-11-18 10:44:15.728747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:50.052 [2024-11-18 10:44:15.728765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:50.052 [2024-11-18 10:44:15.728780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:50.052 [2024-11-18 10:44:15.728798] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:50.052 [2024-11-18 10:44:15.728844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:50.052 [2024-11-18 10:44:15.728871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:50.052 [2024-11-18 10:44:15.728899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:50.052 [2024-11-18 10:44:15.728923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:50.052 [2024-11-18 10:44:15.728944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:50.052 [2024-11-18 10:44:15.728967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:50.052 [2024-11-18 10:44:15.729034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:50.052 [2024-11-18 10:44:15.729059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:50.052 [2024-11-18 10:44:15.729081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:50.052 [2024-11-18 10:44:15.729105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:50.052 [2024-11-18 10:44:15.729170] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:50.053 [2024-11-18 10:44:15.729194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:50.053 [2024-11-18 10:44:15.729232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:50.053 [2024-11-18 10:44:15.729294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:50.053 [2024-11-18 10:44:15.729318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:50.053 [2024-11-18 10:44:15.729341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:50.053 [2024-11-18 10:44:15.729365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:50.053 [2024-11-18 10:44:15.729418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:50.053 [2024-11-18 10:44:15.729459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:50.053 [2024-11-18 10:44:15.729482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:50.053 [2024-11-18 10:44:15.729503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:50.053 [2024-11-18 10:44:15.729527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.053 [2024-11-18 10:44:15.729582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:50.053 [2024-11-18 10:44:15.729601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:15:50.053 [2024-11-18 10:44:15.729616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.053 [2024-11-18 10:44:15.729656] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
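Worked check on the layout dump above: with "L2P address size: 4" and "L2P entries: 20971520", a fully resident mapping table needs 20971520 × 4 B = 80 MiB, which is exactly the 80.00 MiB "Region l2p" reported. Since bdev_ftl_create was invoked with --l2p_dram_limit 20, only a ~20 MiB window of that table can be cached in DRAM; the later init notice ("l2p maximum resident size is: 19 (of 20) MiB") is consistent with that limit.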
00:15:50.053 [2024-11-18 10:44:15.729708] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:54.269 [2024-11-18 10:44:19.544519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.269 [2024-11-18 10:44:19.544857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:54.269 [2024-11-18 10:44:19.545032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3814.833 ms 00:15:54.269 [2024-11-18 10:44:19.545063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.269 [2024-11-18 10:44:19.576472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.269 [2024-11-18 10:44:19.576686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.269 [2024-11-18 10:44:19.576883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.135 ms 00:15:54.269 [2024-11-18 10:44:19.576896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.269 [2024-11-18 10:44:19.577044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.269 [2024-11-18 10:44:19.577056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:54.270 [2024-11-18 10:44:19.577071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:15:54.270 [2024-11-18 10:44:19.577079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.621135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.621186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.270 [2024-11-18 10:44:19.621233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.015 ms 00:15:54.270 [2024-11-18 10:44:19.621244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.621285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.621299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.270 [2024-11-18 10:44:19.621310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:54.270 [2024-11-18 10:44:19.621318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.621926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.621958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.270 [2024-11-18 10:44:19.621971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:15:54.270 [2024-11-18 10:44:19.621979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.622101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.622111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.270 [2024-11-18 10:44:19.622124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:15:54.270 [2024-11-18 10:44:19.622132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.638605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.638650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.270 [2024-11-18 
10:44:19.638664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.454 ms 00:15:54.270 [2024-11-18 10:44:19.638673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.652245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:54.270 [2024-11-18 10:44:19.659578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.659782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:54.270 [2024-11-18 10:44:19.659802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.823 ms 00:15:54.270 [2024-11-18 10:44:19.659813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.758422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.758644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:54.270 [2024-11-18 10:44:19.758668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.577 ms 00:15:54.270 [2024-11-18 10:44:19.758680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.758877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.758894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:54.270 [2024-11-18 10:44:19.758903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:54.270 [2024-11-18 10:44:19.758914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.784692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.784748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:54.270 [2024-11-18 10:44:19.784762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.724 ms 00:15:54.270 [2024-11-18 10:44:19.784773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.812066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.812118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:54.270 [2024-11-18 10:44:19.812131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.246 ms 00:15:54.270 [2024-11-18 10:44:19.812141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.812808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.812832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:54.270 [2024-11-18 10:44:19.812842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:15:54.270 [2024-11-18 10:44:19.812852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.893665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.893869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:54.270 [2024-11-18 10:44:19.893891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.772 ms 00:15:54.270 [2024-11-18 10:44:19.893902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 
10:44:19.921677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.921733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:54.270 [2024-11-18 10:44:19.921746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.693 ms 00:15:54.270 [2024-11-18 10:44:19.921761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.947565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.270 [2024-11-18 10:44:19.947615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:54.270 [2024-11-18 10:44:19.947627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.759 ms 00:15:54.270 [2024-11-18 10:44:19.947637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.270 [2024-11-18 10:44:19.973384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.271 [2024-11-18 10:44:19.973440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:54.271 [2024-11-18 10:44:19.973453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.703 ms 00:15:54.271 [2024-11-18 10:44:19.973463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.271 [2024-11-18 10:44:19.973513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.271 [2024-11-18 10:44:19.973528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:54.271 [2024-11-18 10:44:19.973537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:54.271 [2024-11-18 10:44:19.973548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.271 [2024-11-18 10:44:19.973639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.271 [2024-11-18 10:44:19.973653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:54.271 [2024-11-18 10:44:19.973662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:15:54.271 [2024-11-18 10:44:19.973672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.271 [2024-11-18 10:44:19.974806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4258.366 ms, result 0 00:15:54.271 { 00:15:54.271 "name": "ftl0", 00:15:54.271 "uuid": "b45ea8d2-858a-4a3d-8018-262b7ad21dfe" 00:15:54.271 } 00:15:54.271 10:44:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:54.271 10:44:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:54.271 10:44:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:54.532 10:44:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:54.532 [2024-11-18 10:44:20.315004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:54.532 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:54.532 Zero copy mechanism will not be used. 00:15:54.532 Running I/O for 4 seconds... 
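For orientation, the measured runs that follow are driven over RPC against the bdevperf process started earlier with -z (wait for RPC instead of running immediately). A condensed sketch of the sequence this log exercises, using only binaries and flags that appear in the trace (paths relative to /home/vagrant/spdk_repo/spdk; the -d argument is the lvol UUID created above, abbreviated here as a placeholder):

# Start bdevperf in RPC-driven mode, targeting the bdev named ftl0.
build/examples/bdevperf -z -T ftl0 &
# Assemble the FTL bdev: base lvol on 0000:00:11.0, NV-cache split on 0000:00:10.0.
scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20
# Drive the three measured workloads, 4 seconds each.
examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
# Tear the FTL instance down (this triggers the shutdown/persist steps at the end of the log).
scripts/rpc.py bdev_ftl_delete -b ftl0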
00:15:56.871 1206.00 IOPS, 80.09 MiB/s [2024-11-18T10:44:23.329Z] 1138.50 IOPS, 75.60 MiB/s [2024-11-18T10:44:24.722Z] 1109.33 IOPS, 73.67 MiB/s [2024-11-18T10:44:24.722Z] 1017.50 IOPS, 67.57 MiB/s 00:15:58.838 Latency(us) 00:15:58.838 [2024-11-18T10:44:24.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.838 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:58.838 ftl0 : 4.00 1017.19 67.55 0.00 0.00 1039.93 178.81 2571.03 00:15:58.838 [2024-11-18T10:44:24.722Z] =================================================================================================================== 00:15:58.838 [2024-11-18T10:44:24.722Z] Total : 1017.19 67.55 0.00 0.00 1039.93 178.81 2571.03 00:15:58.838 [2024-11-18 10:44:24.327054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:58.838 { 00:15:58.838 "results": [ 00:15:58.839 { 00:15:58.839 "job": "ftl0", 00:15:58.839 "core_mask": "0x1", 00:15:58.839 "workload": "randwrite", 00:15:58.839 "status": "finished", 00:15:58.839 "queue_depth": 1, 00:15:58.839 "io_size": 69632, 00:15:58.839 "runtime": 4.002214, 00:15:58.839 "iops": 1017.186987002694, 00:15:58.839 "mibps": 67.54757335564764, 00:15:58.839 "io_failed": 0, 00:15:58.839 "io_timeout": 0, 00:15:58.839 "avg_latency_us": 1039.9277924531866, 00:15:58.839 "min_latency_us": 178.80615384615385, 00:15:58.839 "max_latency_us": 2571.027692307692 00:15:58.839 } 00:15:58.839 ], 00:15:58.839 "core_count": 1 00:15:58.839 } 00:15:58.839 10:44:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:58.839 [2024-11-18 10:44:24.451459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:58.839 Running I/O for 4 seconds... 
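The MiB/s column in these result tables is simply IOPS × I/O size: for the 69632-byte run above, 1017.19 IOPS × 69632 B ≈ 70,829,000 B/s, and 70,829,000 / 2^20 ≈ 67.55 MiB/s, matching the reported figure. The same arithmetic ties together the 4096-byte runs that follow (e.g. 5717.95 × 4096 B ≈ 22.34 MiB/s).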
00:16:00.732 6610.00 IOPS, 25.82 MiB/s [2024-11-18T10:44:27.564Z] 5990.00 IOPS, 23.40 MiB/s [2024-11-18T10:44:28.510Z] 5916.00 IOPS, 23.11 MiB/s [2024-11-18T10:44:28.510Z] 5726.50 IOPS, 22.37 MiB/s 00:16:02.626 Latency(us) 00:16:02.626 [2024-11-18T10:44:28.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.626 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.626 ftl0 : 4.03 5717.95 22.34 0.00 0.00 22304.71 261.51 56058.49 00:16:02.626 [2024-11-18T10:44:28.510Z] =================================================================================================================== 00:16:02.626 [2024-11-18T10:44:28.510Z] Total : 5717.95 22.34 0.00 0.00 22304.71 0.00 56058.49 00:16:02.626 [2024-11-18 10:44:28.488541] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:02.626 { 00:16:02.626 "results": [ 00:16:02.626 { 00:16:02.626 "job": "ftl0", 00:16:02.626 "core_mask": "0x1", 00:16:02.626 "workload": "randwrite", 00:16:02.626 "status": "finished", 00:16:02.626 "queue_depth": 128, 00:16:02.626 "io_size": 4096, 00:16:02.626 "runtime": 4.027842, 00:16:02.626 "iops": 5717.950207580138, 00:16:02.626 "mibps": 22.335742998359915, 00:16:02.626 "io_failed": 0, 00:16:02.626 "io_timeout": 0, 00:16:02.626 "avg_latency_us": 22304.706280164195, 00:16:02.626 "min_latency_us": 261.51384615384615, 00:16:02.626 "max_latency_us": 56058.486153846156 00:16:02.626 } 00:16:02.626 ], 00:16:02.626 "core_count": 1 00:16:02.626 } 00:16:02.627 10:44:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:02.888 [2024-11-18 10:44:28.614460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:02.888 Running I/O for 4 seconds... 
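One cross-check on the verify pass below: its "Verification LBA range: start 0x0 length 0x1400000" works out to 20971520 blocks, the same figure the startup layout dump reported for "L2P entries", so the verify workload appears to span the full logical address space exposed by ftl0 (20971520 × 4 KiB blocks).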
00:16:04.790 4966.00 IOPS, 19.40 MiB/s [2024-11-18T10:44:32.063Z] 4812.50 IOPS, 18.80 MiB/s [2024-11-18T10:44:32.637Z] 4741.00 IOPS, 18.52 MiB/s [2024-11-18T10:44:32.637Z] 4787.00 IOPS, 18.70 MiB/s 00:16:06.753 Latency(us) 00:16:06.753 [2024-11-18T10:44:32.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.753 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.753 Verification LBA range: start 0x0 length 0x1400000 00:16:06.753 ftl0 : 4.01 4805.33 18.77 0.00 0.00 26570.69 286.72 106470.79 00:16:06.753 [2024-11-18T10:44:32.637Z] =================================================================================================================== 00:16:06.753 [2024-11-18T10:44:32.637Z] Total : 4805.33 18.77 0.00 0.00 26570.69 0.00 106470.79 00:16:07.015 [2024-11-18 10:44:32.642650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:07.015 { 00:16:07.015 "results": [ 00:16:07.015 { 00:16:07.015 "job": "ftl0", 00:16:07.015 "core_mask": "0x1", 00:16:07.015 "workload": "verify", 00:16:07.015 "status": "finished", 00:16:07.015 "verify_range": { 00:16:07.015 "start": 0, 00:16:07.015 "length": 20971520 00:16:07.015 }, 00:16:07.015 "queue_depth": 128, 00:16:07.015 "io_size": 4096, 00:16:07.015 "runtime": 4.011379, 00:16:07.015 "iops": 4805.330037376174, 00:16:07.015 "mibps": 18.77082045850068, 00:16:07.015 "io_failed": 0, 00:16:07.015 "io_timeout": 0, 00:16:07.015 "avg_latency_us": 26570.689502769485, 00:16:07.015 "min_latency_us": 286.72, 00:16:07.015 "max_latency_us": 106470.79384615384 00:16:07.015 } 00:16:07.015 ], 00:16:07.015 "core_count": 1 00:16:07.015 } 00:16:07.015 10:44:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:07.015 [2024-11-18 10:44:32.853673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.015 [2024-11-18 10:44:32.853889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:07.015 [2024-11-18 10:44:32.854035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:07.015 [2024-11-18 10:44:32.854053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.015 [2024-11-18 10:44:32.854086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:07.015 [2024-11-18 10:44:32.857168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.015 [2024-11-18 10:44:32.857343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:07.015 [2024-11-18 10:44:32.857369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:16:07.015 [2024-11-18 10:44:32.857378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.015 [2024-11-18 10:44:32.860368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.015 [2024-11-18 10:44:32.860524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:07.015 [2024-11-18 10:44:32.860546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:16:07.015 [2024-11-18 10:44:32.860554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.277 [2024-11-18 10:44:33.084718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.277 [2024-11-18 10:44:33.084917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:16:07.277 [2024-11-18 10:44:33.084962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 224.130 ms 00:16:07.277 [2024-11-18 10:44:33.084972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.277 [2024-11-18 10:44:33.091151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.277 [2024-11-18 10:44:33.091194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:07.277 [2024-11-18 10:44:33.091226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms 00:16:07.277 [2024-11-18 10:44:33.091236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.277 [2024-11-18 10:44:33.117674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.277 [2024-11-18 10:44:33.117724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:07.277 [2024-11-18 10:44:33.117740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.365 ms 00:16:07.277 [2024-11-18 10:44:33.117748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.277 [2024-11-18 10:44:33.135043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.277 [2024-11-18 10:44:33.135229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:07.277 [2024-11-18 10:44:33.135258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.248 ms 00:16:07.277 [2024-11-18 10:44:33.135266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.277 [2024-11-18 10:44:33.135417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.277 [2024-11-18 10:44:33.135429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:07.277 [2024-11-18 10:44:33.135443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:16:07.277 [2024-11-18 10:44:33.135450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.540 [2024-11-18 10:44:33.161124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.540 [2024-11-18 10:44:33.161169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:07.540 [2024-11-18 10:44:33.161184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.655 ms 00:16:07.540 [2024-11-18 10:44:33.161192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.540 [2024-11-18 10:44:33.185830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.540 [2024-11-18 10:44:33.185873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:07.540 [2024-11-18 10:44:33.185887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.572 ms 00:16:07.540 [2024-11-18 10:44:33.185894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.540 [2024-11-18 10:44:33.210231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.540 [2024-11-18 10:44:33.210403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:07.540 [2024-11-18 10:44:33.210427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.289 ms 00:16:07.540 [2024-11-18 10:44:33.210435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.540 [2024-11-18 10:44:33.234444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.540 [2024-11-18 10:44:33.234487] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:07.540 [2024-11-18 10:44:33.234504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.879 ms 00:16:07.540 [2024-11-18 10:44:33.234511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.540 [2024-11-18 10:44:33.234555] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:07.540 [2024-11-18 10:44:33.234570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:07.540 [2024-11-18 10:44:33.234768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.234998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.235005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.235014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.235022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:07.540 [2024-11-18 10:44:33.235031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235452] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:07.541 [2024-11-18 10:44:33.235497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:07.541 [2024-11-18 10:44:33.235506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b45ea8d2-858a-4a3d-8018-262b7ad21dfe 00:16:07.541 [2024-11-18 10:44:33.235514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:07.541 [2024-11-18 10:44:33.235524] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:07.541 [2024-11-18 10:44:33.235533] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:07.541 [2024-11-18 10:44:33.235544] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:07.541 [2024-11-18 10:44:33.235551] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:07.541 [2024-11-18 10:44:33.235561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:07.541 [2024-11-18 10:44:33.235577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:07.541 [2024-11-18 10:44:33.235587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:07.541 [2024-11-18 10:44:33.235594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:07.541 [2024-11-18 10:44:33.235604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.541 [2024-11-18 10:44:33.235612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:07.541 [2024-11-18 10:44:33.235623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:16:07.541 [2024-11-18 10:44:33.235631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.249499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.541 [2024-11-18 10:44:33.249666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:07.541 [2024-11-18 10:44:33.249688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.830 ms 00:16:07.541 [2024-11-18 10:44:33.249696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.250092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.541 [2024-11-18 10:44:33.250102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:07.541 [2024-11-18 10:44:33.250113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:16:07.541 [2024-11-18 10:44:33.250121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.288685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.541 [2024-11-18 10:44:33.288861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:07.541 [2024-11-18 10:44:33.288889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.541 [2024-11-18 10:44:33.288898] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.288973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.541 [2024-11-18 10:44:33.288982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:07.541 [2024-11-18 10:44:33.288993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.541 [2024-11-18 10:44:33.289000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.289101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.541 [2024-11-18 10:44:33.289116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:07.541 [2024-11-18 10:44:33.289126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.541 [2024-11-18 10:44:33.289134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.289153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.541 [2024-11-18 10:44:33.289162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:07.541 [2024-11-18 10:44:33.289172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.541 [2024-11-18 10:44:33.289180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.541 [2024-11-18 10:44:33.374134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.541 [2024-11-18 10:44:33.374187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:07.541 [2024-11-18 10:44:33.374230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.541 [2024-11-18 10:44:33.374240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.443486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.443705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:07.803 [2024-11-18 10:44:33.443729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.803 [2024-11-18 10:44:33.443738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.443828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.443839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:07.803 [2024-11-18 10:44:33.443853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.803 [2024-11-18 10:44:33.443861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.443925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.443935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:07.803 [2024-11-18 10:44:33.443947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.803 [2024-11-18 10:44:33.443955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.444065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.444075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:07.803 [2024-11-18 10:44:33.444092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:07.803 [2024-11-18 10:44:33.444100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.444134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.444144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:07.803 [2024-11-18 10:44:33.444154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.803 [2024-11-18 10:44:33.444162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.444236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.444247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:07.803 [2024-11-18 10:44:33.444258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.803 [2024-11-18 10:44:33.444268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.444317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.803 [2024-11-18 10:44:33.444334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:07.803 [2024-11-18 10:44:33.444345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.803 [2024-11-18 10:44:33.444353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.803 [2024-11-18 10:44:33.444524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 590.798 ms, result 0 00:16:07.803 true 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73115 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73115 ']' 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73115 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73115 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.803 killing process with pid 73115 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73115' 00:16:07.803 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73115 00:16:07.803 Received shutdown signal, test time was about 4.000000 seconds 00:16:07.803 00:16:07.803 Latency(us) 00:16:07.803 [2024-11-18T10:44:33.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.803 [2024-11-18T10:44:33.688Z] =================================================================================================================== 00:16:07.804 [2024-11-18T10:44:33.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:07.804 10:44:33 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73115 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:08.747 Remove shared memory files 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:08.747 10:44:34 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:08.747 ************************************ 00:16:08.747 END TEST ftl_bdevperf 00:16:08.747 ************************************ 00:16:08.747 00:16:08.747 real 0m22.545s 00:16:08.747 user 0m25.089s 00:16:08.747 sys 0m0.985s 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.747 10:44:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 10:44:34 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:08.747 10:44:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:08.747 10:44:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.747 10:44:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 ************************************ 00:16:08.747 START TEST ftl_trim 00:16:08.747 ************************************ 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:08.747 * Looking for test storage... 00:16:08.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.747 10:44:34 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.747 --rc genhtml_branch_coverage=1 00:16:08.747 --rc genhtml_function_coverage=1 00:16:08.747 --rc genhtml_legend=1 00:16:08.747 --rc geninfo_all_blocks=1 00:16:08.747 --rc geninfo_unexecuted_blocks=1 00:16:08.747 00:16:08.747 ' 00:16:08.747 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.747 --rc genhtml_branch_coverage=1 00:16:08.747 --rc genhtml_function_coverage=1 00:16:08.747 --rc genhtml_legend=1 00:16:08.748 --rc geninfo_all_blocks=1 00:16:08.748 --rc geninfo_unexecuted_blocks=1 00:16:08.748 00:16:08.748 ' 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:08.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.748 --rc genhtml_branch_coverage=1 00:16:08.748 --rc genhtml_function_coverage=1 00:16:08.748 --rc genhtml_legend=1 00:16:08.748 --rc geninfo_all_blocks=1 00:16:08.748 --rc geninfo_unexecuted_blocks=1 00:16:08.748 00:16:08.748 ' 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:08.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.748 --rc genhtml_branch_coverage=1 00:16:08.748 --rc genhtml_function_coverage=1 00:16:08.748 --rc genhtml_legend=1 00:16:08.748 --rc geninfo_all_blocks=1 00:16:08.748 --rc geninfo_unexecuted_blocks=1 00:16:08.748 00:16:08.748 ' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
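The xtrace entries a few steps above walk through the version-compare helpers from scripts/common.sh (lt, cmp_versions, decimal) while probing the installed lcov. Below is a condensed sketch of the path the trace takes, reconstructed only from those entries; the control flow is simplified (the real helper dispatches on the operator with a case statement) and the upstream version handles more operators and edge cases than shown here.

#!/usr/bin/env bash
# Sketch of the version-compare helpers exercised in the trace above.

decimal() {
    local d=$1
    # The trace only ever feeds plain integers ("decimal 1", "decimal 2");
    # anything non-numeric counts as 0 in this sketch.
    if [[ $d =~ ^[0-9]+$ ]]; then
        echo "$d"
    else
        echo 0
    fi
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    # Split versions on '.', '-' and ':' exactly as the trace does (IFS=.-:).
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}

    # Walk the components of the longer version; missing components count as 0.
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        if ((ver1[v] > ver2[v])); then
            [[ $op == '>' ]]
            return
        elif ((ver1[v] < ver2[v])); then
            [[ $op == '<' ]]
            return
        fi
    done
    # Every component matched, so only equality-style operators succeed.
    [[ $op == '=' || $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the branch taken in the trace above

After that check selects the old-lcov option set, the trace returns to ftl/common.sh and trim.sh, resolving rootdir and exporting the FTL target settings used by the rest of the test.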
00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:08.748 10:44:34 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73473 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73473 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73473 ']' 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.748 10:44:34 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.748 10:44:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:09.009 [2024-11-18 10:44:34.630390] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:09.009 [2024-11-18 10:44:34.630788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73473 ] 00:16:09.009 [2024-11-18 10:44:34.800186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.269 [2024-11-18 10:44:34.922995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.269 [2024-11-18 10:44:34.923329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.269 [2024-11-18 10:44:34.923462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.841 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.841 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:09.841 10:44:35 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:09.841 10:44:35 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:09.841 10:44:35 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:09.841 10:44:35 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:09.841 10:44:35 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:09.841 10:44:35 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:10.101 10:44:35 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:10.101 10:44:35 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:10.101 10:44:35 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:10.101 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:10.101 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:10.101 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:10.101 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:10.101 10:44:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:10.362 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:10.362 { 00:16:10.362 "name": "nvme0n1", 00:16:10.362 "aliases": [ 
00:16:10.362 "72a6b812-40d1-4173-9d5c-55cf6ce3c9ec" 00:16:10.362 ], 00:16:10.362 "product_name": "NVMe disk", 00:16:10.362 "block_size": 4096, 00:16:10.362 "num_blocks": 1310720, 00:16:10.362 "uuid": "72a6b812-40d1-4173-9d5c-55cf6ce3c9ec", 00:16:10.362 "numa_id": -1, 00:16:10.362 "assigned_rate_limits": { 00:16:10.362 "rw_ios_per_sec": 0, 00:16:10.362 "rw_mbytes_per_sec": 0, 00:16:10.362 "r_mbytes_per_sec": 0, 00:16:10.362 "w_mbytes_per_sec": 0 00:16:10.362 }, 00:16:10.362 "claimed": true, 00:16:10.362 "claim_type": "read_many_write_one", 00:16:10.362 "zoned": false, 00:16:10.362 "supported_io_types": { 00:16:10.362 "read": true, 00:16:10.362 "write": true, 00:16:10.362 "unmap": true, 00:16:10.362 "flush": true, 00:16:10.362 "reset": true, 00:16:10.362 "nvme_admin": true, 00:16:10.362 "nvme_io": true, 00:16:10.362 "nvme_io_md": false, 00:16:10.362 "write_zeroes": true, 00:16:10.362 "zcopy": false, 00:16:10.362 "get_zone_info": false, 00:16:10.362 "zone_management": false, 00:16:10.362 "zone_append": false, 00:16:10.362 "compare": true, 00:16:10.362 "compare_and_write": false, 00:16:10.362 "abort": true, 00:16:10.362 "seek_hole": false, 00:16:10.362 "seek_data": false, 00:16:10.362 "copy": true, 00:16:10.362 "nvme_iov_md": false 00:16:10.362 }, 00:16:10.362 "driver_specific": { 00:16:10.362 "nvme": [ 00:16:10.362 { 00:16:10.362 "pci_address": "0000:00:11.0", 00:16:10.362 "trid": { 00:16:10.362 "trtype": "PCIe", 00:16:10.362 "traddr": "0000:00:11.0" 00:16:10.362 }, 00:16:10.362 "ctrlr_data": { 00:16:10.362 "cntlid": 0, 00:16:10.362 "vendor_id": "0x1b36", 00:16:10.362 "model_number": "QEMU NVMe Ctrl", 00:16:10.362 "serial_number": "12341", 00:16:10.362 "firmware_revision": "8.0.0", 00:16:10.362 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:10.362 "oacs": { 00:16:10.362 "security": 0, 00:16:10.362 "format": 1, 00:16:10.362 "firmware": 0, 00:16:10.362 "ns_manage": 1 00:16:10.362 }, 00:16:10.362 "multi_ctrlr": false, 00:16:10.362 "ana_reporting": false 00:16:10.362 }, 00:16:10.362 "vs": { 00:16:10.362 "nvme_version": "1.4" 00:16:10.362 }, 00:16:10.362 "ns_data": { 00:16:10.362 "id": 1, 00:16:10.362 "can_share": false 00:16:10.362 } 00:16:10.362 } 00:16:10.362 ], 00:16:10.362 "mp_policy": "active_passive" 00:16:10.362 } 00:16:10.362 } 00:16:10.362 ]' 00:16:10.362 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:10.362 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:10.362 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:10.362 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:10.363 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:10.363 10:44:36 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:10.363 10:44:36 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:10.363 10:44:36 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:10.363 10:44:36 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:10.363 10:44:36 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:10.363 10:44:36 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:10.623 10:44:36 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f15d1bb0-32bd-4e5d-baf0-ebcec62c951e 00:16:10.623 10:44:36 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:10.623 10:44:36 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f15d1bb0-32bd-4e5d-baf0-ebcec62c951e 00:16:10.884 10:44:36 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:11.144 10:44:36 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=822a6030-028f-4116-bf27-f8c281d4b2f2 00:16:11.144 10:44:36 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 822a6030-028f-4116-bf27-f8c281d4b2f2 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:11.406 10:44:37 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.406 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.406 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:11.406 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:11.406 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:11.406 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.667 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:11.667 { 00:16:11.667 "name": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:11.667 "aliases": [ 00:16:11.667 "lvs/nvme0n1p0" 00:16:11.667 ], 00:16:11.667 "product_name": "Logical Volume", 00:16:11.667 "block_size": 4096, 00:16:11.667 "num_blocks": 26476544, 00:16:11.667 "uuid": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:11.667 "assigned_rate_limits": { 00:16:11.667 "rw_ios_per_sec": 0, 00:16:11.667 "rw_mbytes_per_sec": 0, 00:16:11.667 "r_mbytes_per_sec": 0, 00:16:11.667 "w_mbytes_per_sec": 0 00:16:11.667 }, 00:16:11.667 "claimed": false, 00:16:11.667 "zoned": false, 00:16:11.667 "supported_io_types": { 00:16:11.667 "read": true, 00:16:11.667 "write": true, 00:16:11.667 "unmap": true, 00:16:11.667 "flush": false, 00:16:11.667 "reset": true, 00:16:11.667 "nvme_admin": false, 00:16:11.667 "nvme_io": false, 00:16:11.667 "nvme_io_md": false, 00:16:11.667 "write_zeroes": true, 00:16:11.667 "zcopy": false, 00:16:11.667 "get_zone_info": false, 00:16:11.667 "zone_management": false, 00:16:11.667 "zone_append": false, 00:16:11.667 "compare": false, 00:16:11.667 "compare_and_write": false, 00:16:11.667 "abort": false, 00:16:11.667 "seek_hole": true, 00:16:11.667 "seek_data": true, 00:16:11.667 "copy": false, 00:16:11.667 "nvme_iov_md": false 00:16:11.667 }, 00:16:11.667 "driver_specific": { 00:16:11.667 "lvol": { 00:16:11.667 "lvol_store_uuid": "822a6030-028f-4116-bf27-f8c281d4b2f2", 00:16:11.667 "base_bdev": "nvme0n1", 00:16:11.667 "thin_provision": true, 00:16:11.667 "num_allocated_clusters": 0, 00:16:11.667 "snapshot": false, 00:16:11.667 "clone": false, 00:16:11.667 "esnap_clone": false 00:16:11.667 } 00:16:11.667 } 00:16:11.667 } 00:16:11.667 ]' 00:16:11.667 10:44:37 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:11.667 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:11.667 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:11.667 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:11.667 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:11.667 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:11.667 10:44:37 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:11.667 10:44:37 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:11.667 10:44:37 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:11.927 10:44:37 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:11.927 10:44:37 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:11.927 10:44:37 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.927 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:11.927 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:11.927 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:11.927 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:11.927 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:12.187 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:12.187 { 00:16:12.187 "name": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:12.187 "aliases": [ 00:16:12.187 "lvs/nvme0n1p0" 00:16:12.187 ], 00:16:12.187 "product_name": "Logical Volume", 00:16:12.187 "block_size": 4096, 00:16:12.187 "num_blocks": 26476544, 00:16:12.187 "uuid": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:12.187 "assigned_rate_limits": { 00:16:12.187 "rw_ios_per_sec": 0, 00:16:12.187 "rw_mbytes_per_sec": 0, 00:16:12.187 "r_mbytes_per_sec": 0, 00:16:12.187 "w_mbytes_per_sec": 0 00:16:12.187 }, 00:16:12.187 "claimed": false, 00:16:12.188 "zoned": false, 00:16:12.188 "supported_io_types": { 00:16:12.188 "read": true, 00:16:12.188 "write": true, 00:16:12.188 "unmap": true, 00:16:12.188 "flush": false, 00:16:12.188 "reset": true, 00:16:12.188 "nvme_admin": false, 00:16:12.188 "nvme_io": false, 00:16:12.188 "nvme_io_md": false, 00:16:12.188 "write_zeroes": true, 00:16:12.188 "zcopy": false, 00:16:12.188 "get_zone_info": false, 00:16:12.188 "zone_management": false, 00:16:12.188 "zone_append": false, 00:16:12.188 "compare": false, 00:16:12.188 "compare_and_write": false, 00:16:12.188 "abort": false, 00:16:12.188 "seek_hole": true, 00:16:12.188 "seek_data": true, 00:16:12.188 "copy": false, 00:16:12.188 "nvme_iov_md": false 00:16:12.188 }, 00:16:12.188 "driver_specific": { 00:16:12.188 "lvol": { 00:16:12.188 "lvol_store_uuid": "822a6030-028f-4116-bf27-f8c281d4b2f2", 00:16:12.188 "base_bdev": "nvme0n1", 00:16:12.188 "thin_provision": true, 00:16:12.188 "num_allocated_clusters": 0, 00:16:12.188 "snapshot": false, 00:16:12.188 "clone": false, 00:16:12.188 "esnap_clone": false 00:16:12.188 } 00:16:12.188 } 00:16:12.188 } 00:16:12.188 ]' 00:16:12.188 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:12.188 10:44:37 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:12.188 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:12.188 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:12.188 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:12.188 10:44:37 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:12.188 10:44:37 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:12.188 10:44:37 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:12.448 10:44:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:12.448 10:44:38 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:12.448 10:44:38 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:12.448 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:12.448 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:12.448 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:12.448 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:12.448 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 024bb752-448f-44f8-ba99-93dfe9cd955c 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:12.708 { 00:16:12.708 "name": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:12.708 "aliases": [ 00:16:12.708 "lvs/nvme0n1p0" 00:16:12.708 ], 00:16:12.708 "product_name": "Logical Volume", 00:16:12.708 "block_size": 4096, 00:16:12.708 "num_blocks": 26476544, 00:16:12.708 "uuid": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:12.708 "assigned_rate_limits": { 00:16:12.708 "rw_ios_per_sec": 0, 00:16:12.708 "rw_mbytes_per_sec": 0, 00:16:12.708 "r_mbytes_per_sec": 0, 00:16:12.708 "w_mbytes_per_sec": 0 00:16:12.708 }, 00:16:12.708 "claimed": false, 00:16:12.708 "zoned": false, 00:16:12.708 "supported_io_types": { 00:16:12.708 "read": true, 00:16:12.708 "write": true, 00:16:12.708 "unmap": true, 00:16:12.708 "flush": false, 00:16:12.708 "reset": true, 00:16:12.708 "nvme_admin": false, 00:16:12.708 "nvme_io": false, 00:16:12.708 "nvme_io_md": false, 00:16:12.708 "write_zeroes": true, 00:16:12.708 "zcopy": false, 00:16:12.708 "get_zone_info": false, 00:16:12.708 "zone_management": false, 00:16:12.708 "zone_append": false, 00:16:12.708 "compare": false, 00:16:12.708 "compare_and_write": false, 00:16:12.708 "abort": false, 00:16:12.708 "seek_hole": true, 00:16:12.708 "seek_data": true, 00:16:12.708 "copy": false, 00:16:12.708 "nvme_iov_md": false 00:16:12.708 }, 00:16:12.708 "driver_specific": { 00:16:12.708 "lvol": { 00:16:12.708 "lvol_store_uuid": "822a6030-028f-4116-bf27-f8c281d4b2f2", 00:16:12.708 "base_bdev": "nvme0n1", 00:16:12.708 "thin_provision": true, 00:16:12.708 "num_allocated_clusters": 0, 00:16:12.708 "snapshot": false, 00:16:12.708 "clone": false, 00:16:12.708 "esnap_clone": false 00:16:12.708 } 00:16:12.708 } 00:16:12.708 } 00:16:12.708 ]' 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:12.708 10:44:38 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:12.708 10:44:38 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:12.708 10:44:38 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 024bb752-448f-44f8-ba99-93dfe9cd955c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:12.971 [2024-11-18 10:44:38.592136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.592172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:12.971 [2024-11-18 10:44:38.592186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:12.971 [2024-11-18 10:44:38.592193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.594440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.594468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:12.971 [2024-11-18 10:44:38.594477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:16:12.971 [2024-11-18 10:44:38.594483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.594557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:12.971 [2024-11-18 10:44:38.595171] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:12.971 [2024-11-18 10:44:38.595202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.595218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:12.971 [2024-11-18 10:44:38.595226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:16:12.971 [2024-11-18 10:44:38.595232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.595333] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:16:12.971 [2024-11-18 10:44:38.596323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.596352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:12.971 [2024-11-18 10:44:38.596359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:12.971 [2024-11-18 10:44:38.596367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.601576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.601681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:12.971 [2024-11-18 10:44:38.601692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:16:12.971 [2024-11-18 10:44:38.601700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.601802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.601811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:12.971 [2024-11-18 10:44:38.601818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:16:12.971 [2024-11-18 10:44:38.601827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.601862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.601869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:12.971 [2024-11-18 10:44:38.601875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:12.971 [2024-11-18 10:44:38.601884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.601914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:12.971 [2024-11-18 10:44:38.604830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.604921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:12.971 [2024-11-18 10:44:38.604936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:16:12.971 [2024-11-18 10:44:38.604943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.604980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.604987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:12.971 [2024-11-18 10:44:38.604994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:12.971 [2024-11-18 10:44:38.605011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.971 [2024-11-18 10:44:38.605036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:12.971 [2024-11-18 10:44:38.605140] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:12.971 [2024-11-18 10:44:38.605152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:12.971 [2024-11-18 10:44:38.605160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:12.971 [2024-11-18 10:44:38.605169] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:12.971 [2024-11-18 10:44:38.605176] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:12.971 [2024-11-18 10:44:38.605183] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:12.971 [2024-11-18 10:44:38.605189] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:12.971 [2024-11-18 10:44:38.605197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:12.971 [2024-11-18 10:44:38.605217] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:12.971 [2024-11-18 10:44:38.605225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.971 [2024-11-18 10:44:38.605230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:12.972 [2024-11-18 10:44:38.605238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:16:12.972 [2024-11-18 10:44:38.605244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.972 [2024-11-18 10:44:38.605323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.972 
[2024-11-18 10:44:38.605329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:12.972 [2024-11-18 10:44:38.605336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:12.972 [2024-11-18 10:44:38.605341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.972 [2024-11-18 10:44:38.605446] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:12.972 [2024-11-18 10:44:38.605453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:12.972 [2024-11-18 10:44:38.605460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:12.972 [2024-11-18 10:44:38.605478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:12.972 [2024-11-18 10:44:38.605496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.972 [2024-11-18 10:44:38.605507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:12.972 [2024-11-18 10:44:38.605512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:12.972 [2024-11-18 10:44:38.605519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.972 [2024-11-18 10:44:38.605524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:12.972 [2024-11-18 10:44:38.605530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:12.972 [2024-11-18 10:44:38.605535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:12.972 [2024-11-18 10:44:38.605548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:12.972 [2024-11-18 10:44:38.605567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:12.972 [2024-11-18 10:44:38.605583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:12.972 [2024-11-18 10:44:38.605600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:12.972 [2024-11-18 10:44:38.605618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:12.972 [2024-11-18 10:44:38.605636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.972 [2024-11-18 10:44:38.605648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:12.972 [2024-11-18 10:44:38.605652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:12.972 [2024-11-18 10:44:38.605659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.972 [2024-11-18 10:44:38.605663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:12.972 [2024-11-18 10:44:38.605670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:12.972 [2024-11-18 10:44:38.605675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:12.972 [2024-11-18 10:44:38.605686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:12.972 [2024-11-18 10:44:38.605692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605696] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:12.972 [2024-11-18 10:44:38.605704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:12.972 [2024-11-18 10:44:38.605709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.972 [2024-11-18 10:44:38.605723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:12.972 [2024-11-18 10:44:38.605731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:12.972 [2024-11-18 10:44:38.605736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:12.972 [2024-11-18 10:44:38.605743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:12.972 [2024-11-18 10:44:38.605748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:12.972 [2024-11-18 10:44:38.605754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:12.972 [2024-11-18 10:44:38.605761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:12.972 [2024-11-18 10:44:38.605770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:12.972 [2024-11-18 10:44:38.605786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:12.972 [2024-11-18 10:44:38.605792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:12.972 [2024-11-18 10:44:38.605798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:12.972 [2024-11-18 10:44:38.605804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:12.972 [2024-11-18 10:44:38.605811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:12.972 [2024-11-18 10:44:38.605816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:12.972 [2024-11-18 10:44:38.605823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:12.972 [2024-11-18 10:44:38.605829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:12.972 [2024-11-18 10:44:38.605837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:12.972 [2024-11-18 10:44:38.605868] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:12.972 [2024-11-18 10:44:38.605875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:12.972 [2024-11-18 10:44:38.605888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:12.972 [2024-11-18 10:44:38.605893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:12.972 [2024-11-18 10:44:38.605900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:12.972 [2024-11-18 10:44:38.605906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.972 [2024-11-18 10:44:38.605913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:12.973 [2024-11-18 10:44:38.605919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:16:12.973 [2024-11-18 10:44:38.605925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.973 [2024-11-18 10:44:38.606009] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:12.973 [2024-11-18 10:44:38.606020] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:15.507 [2024-11-18 10:44:40.934504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.507 [2024-11-18 10:44:40.934687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:15.507 [2024-11-18 10:44:40.934710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2328.485 ms 00:16:15.507 [2024-11-18 10:44:40.934720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.507 [2024-11-18 10:44:40.960630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.507 [2024-11-18 10:44:40.960670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:15.507 [2024-11-18 10:44:40.960682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:16:15.507 [2024-11-18 10:44:40.960692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.507 [2024-11-18 10:44:40.960826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.507 [2024-11-18 10:44:40.960838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:15.507 [2024-11-18 10:44:40.960846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:15.507 [2024-11-18 10:44:40.960860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.507 [2024-11-18 10:44:41.003335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.507 [2024-11-18 10:44:41.003391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:15.507 [2024-11-18 10:44:41.003409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.425 ms 00:16:15.507 [2024-11-18 10:44:41.003426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.507 [2024-11-18 10:44:41.003558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.003579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:15.508 [2024-11-18 10:44:41.003592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:15.508 [2024-11-18 10:44:41.003606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.004011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.004043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:15.508 [2024-11-18 10:44:41.004058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:16:15.508 [2024-11-18 10:44:41.004072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.004259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.004277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:15.508 [2024-11-18 10:44:41.004288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:15.508 [2024-11-18 10:44:41.004305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.020505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.020534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:15.508 [2024-11-18 10:44:41.020544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.136 ms 00:16:15.508 [2024-11-18 10:44:41.020552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.031931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:15.508 [2024-11-18 10:44:41.046668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.046697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:15.508 [2024-11-18 10:44:41.046709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.011 ms 00:16:15.508 [2024-11-18 10:44:41.046716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.112141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.112182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:15.508 [2024-11-18 10:44:41.112197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.353 ms 00:16:15.508 [2024-11-18 10:44:41.112216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.112460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.112473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:15.508 [2024-11-18 10:44:41.112486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:16:15.508 [2024-11-18 10:44:41.112493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.135759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.135790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:15.508 [2024-11-18 10:44:41.135802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.232 ms 00:16:15.508 [2024-11-18 10:44:41.135812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.158439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.158467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:15.508 [2024-11-18 10:44:41.158480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.566 ms 00:16:15.508 [2024-11-18 10:44:41.158487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.159069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.159086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:15.508 [2024-11-18 10:44:41.159096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:16:15.508 [2024-11-18 10:44:41.159103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.237041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.237092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:15.508 [2024-11-18 10:44:41.237109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.897 ms 00:16:15.508 [2024-11-18 10:44:41.237117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
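The startup trace above reflects the bdev stack the trim test assembles over RPC. A minimal sketch of that sequence, using the RPC names and arguments recorded in the log (the rpc.py path and lvol UUID are specific to this run, and get_bdev_size here is a simplified rendering of the autotest_common.sh helper whose jq calls appear above):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Size of a bdev in MiB, derived the way the logged helper derives it:
# block_size * num_blocks, converted to MiB.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$($RPC bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 in this run
    echo $(( bs * nb / 1024 / 1024 ))              # -> 103424 MiB
}

# Attach the PCIe NVMe namespace that backs the write-buffer cache,
# then carve a 5171 MiB partition off it for the NV cache.
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1

# Create the FTL bdev: the thin-provisioned lvol as base device, the split
# partition as NV cache, L2P capped at 60 MiB of DRAM, 10% overprovisioning.
# The 60 MiB cap is why the trace reports "l2p maximum resident size is:
# 59 (of 60) MiB" above.
$RPC -t 240 bdev_ftl_create -b ftl0 -d 024bb752-448f-44f8-ba99-93dfe9cd955c \
    -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10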
00:16:15.508 [2024-11-18 10:44:41.261254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.261288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:15.508 [2024-11-18 10:44:41.261301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.003 ms 00:16:15.508 [2024-11-18 10:44:41.261309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.284287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.284317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:15.508 [2024-11-18 10:44:41.284329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.908 ms 00:16:15.508 [2024-11-18 10:44:41.284336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.307949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.307980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:15.508 [2024-11-18 10:44:41.307992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:16:15.508 [2024-11-18 10:44:41.308013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.308081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.308091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:15.508 [2024-11-18 10:44:41.308103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:15.508 [2024-11-18 10:44:41.308110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.308191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.508 [2024-11-18 10:44:41.308199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:15.508 [2024-11-18 10:44:41.308221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:15.508 [2024-11-18 10:44:41.308228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.508 [2024-11-18 10:44:41.309067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:15.508 [2024-11-18 10:44:41.311930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2716.595 ms, result 0 00:16:15.508 [2024-11-18 10:44:41.312903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:15.508 { 00:16:15.508 "name": "ftl0", 00:16:15.508 "uuid": "b69bf4f7-41ea-4a56-b153-3b2fc53c436b" 00:16:15.508 } 00:16:15.508 10:44:41 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:15.508 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:15.508 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.508 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:15.508 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.508 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.508 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:15.767 10:44:41 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:16.026 [ 00:16:16.026 { 00:16:16.026 "name": "ftl0", 00:16:16.026 "aliases": [ 00:16:16.026 "b69bf4f7-41ea-4a56-b153-3b2fc53c436b" 00:16:16.026 ], 00:16:16.026 "product_name": "FTL disk", 00:16:16.026 "block_size": 4096, 00:16:16.026 "num_blocks": 23592960, 00:16:16.026 "uuid": "b69bf4f7-41ea-4a56-b153-3b2fc53c436b", 00:16:16.026 "assigned_rate_limits": { 00:16:16.026 "rw_ios_per_sec": 0, 00:16:16.026 "rw_mbytes_per_sec": 0, 00:16:16.026 "r_mbytes_per_sec": 0, 00:16:16.026 "w_mbytes_per_sec": 0 00:16:16.026 }, 00:16:16.026 "claimed": false, 00:16:16.026 "zoned": false, 00:16:16.026 "supported_io_types": { 00:16:16.026 "read": true, 00:16:16.026 "write": true, 00:16:16.026 "unmap": true, 00:16:16.026 "flush": true, 00:16:16.026 "reset": false, 00:16:16.026 "nvme_admin": false, 00:16:16.026 "nvme_io": false, 00:16:16.026 "nvme_io_md": false, 00:16:16.026 "write_zeroes": true, 00:16:16.026 "zcopy": false, 00:16:16.026 "get_zone_info": false, 00:16:16.026 "zone_management": false, 00:16:16.026 "zone_append": false, 00:16:16.026 "compare": false, 00:16:16.026 "compare_and_write": false, 00:16:16.026 "abort": false, 00:16:16.026 "seek_hole": false, 00:16:16.026 "seek_data": false, 00:16:16.026 "copy": false, 00:16:16.026 "nvme_iov_md": false 00:16:16.026 }, 00:16:16.026 "driver_specific": { 00:16:16.026 "ftl": { 00:16:16.026 "base_bdev": "024bb752-448f-44f8-ba99-93dfe9cd955c", 00:16:16.026 "cache": "nvc0n1p0" 00:16:16.026 } 00:16:16.026 } 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 10:44:41 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:16.026 10:44:41 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:16.026 10:44:41 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:16.284 10:44:41 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:16.284 10:44:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:16.284 10:44:42 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:16.284 { 00:16:16.284 "name": "ftl0", 00:16:16.284 "aliases": [ 00:16:16.284 "b69bf4f7-41ea-4a56-b153-3b2fc53c436b" 00:16:16.284 ], 00:16:16.284 "product_name": "FTL disk", 00:16:16.284 "block_size": 4096, 00:16:16.284 "num_blocks": 23592960, 00:16:16.284 "uuid": "b69bf4f7-41ea-4a56-b153-3b2fc53c436b", 00:16:16.284 "assigned_rate_limits": { 00:16:16.284 "rw_ios_per_sec": 0, 00:16:16.284 "rw_mbytes_per_sec": 0, 00:16:16.284 "r_mbytes_per_sec": 0, 00:16:16.284 "w_mbytes_per_sec": 0 00:16:16.284 }, 00:16:16.284 "claimed": false, 00:16:16.284 "zoned": false, 00:16:16.284 "supported_io_types": { 00:16:16.284 "read": true, 00:16:16.284 "write": true, 00:16:16.284 "unmap": true, 00:16:16.284 "flush": true, 00:16:16.284 "reset": false, 00:16:16.284 "nvme_admin": false, 00:16:16.284 "nvme_io": false, 00:16:16.284 "nvme_io_md": false, 00:16:16.284 "write_zeroes": true, 00:16:16.284 "zcopy": false, 00:16:16.284 "get_zone_info": false, 00:16:16.284 "zone_management": false, 00:16:16.284 "zone_append": false, 00:16:16.284 "compare": false, 00:16:16.284 "compare_and_write": false, 00:16:16.284 "abort": false, 00:16:16.284 "seek_hole": false, 00:16:16.284 "seek_data": false, 00:16:16.284 "copy": false, 00:16:16.284 "nvme_iov_md": false 00:16:16.284 }, 00:16:16.284 "driver_specific": { 00:16:16.284 "ftl": { 00:16:16.284 "base_bdev": "024bb752-448f-44f8-ba99-93dfe9cd955c", 
00:16:16.284 "cache": "nvc0n1p0" 00:16:16.284 } 00:16:16.284 } 00:16:16.284 } 00:16:16.284 ]' 00:16:16.284 10:44:42 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:16.284 10:44:42 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:16.284 10:44:42 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:16.543 [2024-11-18 10:44:42.348652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.348693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:16.543 [2024-11-18 10:44:42.348707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.543 [2024-11-18 10:44:42.348716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.348754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:16.543 [2024-11-18 10:44:42.351361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.351387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:16.543 [2024-11-18 10:44:42.351404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:16:16.543 [2024-11-18 10:44:42.351412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.352027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.352043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:16.543 [2024-11-18 10:44:42.352053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:16:16.543 [2024-11-18 10:44:42.352061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.355709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.355729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:16.543 [2024-11-18 10:44:42.355740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:16:16.543 [2024-11-18 10:44:42.355749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.362747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.362773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:16.543 [2024-11-18 10:44:42.362784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:16:16.543 [2024-11-18 10:44:42.362791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.386726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.386758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:16.543 [2024-11-18 10:44:42.386773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.856 ms 00:16:16.543 [2024-11-18 10:44:42.386780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.401623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.401652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:16.543 [2024-11-18 10:44:42.401667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.778 ms 00:16:16.543 [2024-11-18 10:44:42.401675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.401892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.401902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:16.543 [2024-11-18 10:44:42.401912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:16:16.543 [2024-11-18 10:44:42.401920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.543 [2024-11-18 10:44:42.424970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.543 [2024-11-18 10:44:42.424997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:16.543 [2024-11-18 10:44:42.425009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.017 ms 00:16:16.543 [2024-11-18 10:44:42.425016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.803 [2024-11-18 10:44:42.447665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.803 [2024-11-18 10:44:42.447692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:16.803 [2024-11-18 10:44:42.447705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.581 ms 00:16:16.803 [2024-11-18 10:44:42.447712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.803 [2024-11-18 10:44:42.469945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.803 [2024-11-18 10:44:42.469972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:16.803 [2024-11-18 10:44:42.469983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.174 ms 00:16:16.803 [2024-11-18 10:44:42.469990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.803 [2024-11-18 10:44:42.492584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.803 [2024-11-18 10:44:42.492611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:16.803 [2024-11-18 10:44:42.492622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.482 ms 00:16:16.803 [2024-11-18 10:44:42.492629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.803 [2024-11-18 10:44:42.493161] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:16.803 [2024-11-18 10:44:42.493179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493257] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 
[2024-11-18 10:44:42.493481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:16.803 [2024-11-18 10:44:42.493532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:16.804 [2024-11-18 10:44:42.493690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.493993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.494002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.494013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.494022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.494031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.494041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:16.804 [2024-11-18 10:44:42.494057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:16.804 [2024-11-18 10:44:42.494068] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:16:16.804 [2024-11-18 10:44:42.494076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:16.804 [2024-11-18 10:44:42.494084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:16.804 [2024-11-18 10:44:42.494091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:16.804 [2024-11-18 10:44:42.494102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:16.804 [2024-11-18 10:44:42.494109] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:16.804 [2024-11-18 10:44:42.494118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:16.804 [2024-11-18 10:44:42.494125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:16.804 [2024-11-18 10:44:42.494133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:16.804 [2024-11-18 10:44:42.494139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:16.804 [2024-11-18 10:44:42.494148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.804 [2024-11-18 10:44:42.494155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:16.804 [2024-11-18 10:44:42.494166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:16:16.804 [2024-11-18 10:44:42.494173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.804 [2024-11-18 10:44:42.506544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.804 [2024-11-18 10:44:42.506573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:16.804 [2024-11-18 10:44:42.506594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.310 ms 00:16:16.804 [2024-11-18 10:44:42.506602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.804 [2024-11-18 10:44:42.506983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.804 [2024-11-18 10:44:42.506999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:16.804 [2024-11-18 10:44:42.507009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:16:16.804 [2024-11-18 10:44:42.507016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.804 [2024-11-18 10:44:42.551051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.804 [2024-11-18 10:44:42.551078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.804 [2024-11-18 10:44:42.551089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.804 [2024-11-18 10:44:42.551096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.804 [2024-11-18 10:44:42.551200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.804 [2024-11-18 10:44:42.551221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.804 [2024-11-18 10:44:42.551230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.804 [2024-11-18 10:44:42.551238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.804 [2024-11-18 10:44:42.551304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.804 [2024-11-18 10:44:42.551315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.804 [2024-11-18 10:44:42.551325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.804 [2024-11-18 10:44:42.551332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.805 [2024-11-18 10:44:42.551366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.805 [2024-11-18 10:44:42.551374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.805 [2024-11-18 10:44:42.551383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.805 [2024-11-18 10:44:42.551390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.805 [2024-11-18 10:44:42.632228] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.805 [2024-11-18 10:44:42.632267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.805 [2024-11-18 10:44:42.632279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.805 [2024-11-18 10:44:42.632286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.694961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.694995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:17.064 [2024-11-18 10:44:42.695007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.695098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.695108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:17.064 [2024-11-18 10:44:42.695134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.695194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.695203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:17.064 [2024-11-18 10:44:42.695231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.695354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.695365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:17.064 [2024-11-18 10:44:42.695375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.695440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.695448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:17.064 [2024-11-18 10:44:42.695457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.695515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.695523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:17.064 [2024-11-18 10:44:42.695533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.064 [2024-11-18 10:44:42.695600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:17.064 [2024-11-18 10:44:42.695609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:17.064 [2024-11-18 10:44:42.695618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:17.064 [2024-11-18 10:44:42.695625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:17.064 [2024-11-18 10:44:42.695814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.147 ms, result 0 00:16:17.064 true 00:16:17.064 10:44:42 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73473 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73473 ']' 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73473 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73473 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.064 killing process with pid 73473 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73473' 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73473 00:16:17.064 10:44:42 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73473 00:16:23.627 10:44:48 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:23.888 65536+0 records in 00:16:23.888 65536+0 records out 00:16:23.888 268435456 bytes (268 MB, 256 MiB) copied, 1.07961 s, 249 MB/s 00:16:23.888 10:44:49 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.149 [2024-11-18 10:44:49.776794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
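The trace above is trim.sh's data-load step: killprocess from autotest_common.sh stops the previous ftl_trim app (pid 73473, running as reactor_0), dd generates a 256 MiB random pattern (65536 blocks of 4 KiB; 268435456 bytes over 1.07961 s matches the reported ~249 MB/s), and spdk_dd writes that pattern to the ftl0 bdev described by ftl.json. A minimal stand-alone sketch of the same sequence; the dd output path is an assumption mirrored from spdk_dd's --if argument, since the xtrace does not show it:

# stop the previous app the way killprocess does (wait only succeeds when
# $pid is a child of this shell, as it is inside the harness)
pid=73473
if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    kill "$pid" && wait "$pid"
fi
# 65536 x 4 KiB = 256 MiB of random data (output path assumed, see note above)
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
# replay the pattern onto ftl0 via spdk_dd, exactly as logged
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json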
00:16:24.149 [2024-11-18 10:44:49.776943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73658 ] 00:16:24.149 [2024-11-18 10:44:49.938619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.149 [2024-11-18 10:44:50.019723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.410 [2024-11-18 10:44:50.228933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.410 [2024-11-18 10:44:50.228984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.672 [2024-11-18 10:44:50.377218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.672 [2024-11-18 10:44:50.377257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.672 [2024-11-18 10:44:50.377267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:24.672 [2024-11-18 10:44:50.377273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.672 [2024-11-18 10:44:50.379323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.672 [2024-11-18 10:44:50.379353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.672 [2024-11-18 10:44:50.379361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:16:24.672 [2024-11-18 10:44:50.379367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.672 [2024-11-18 10:44:50.379423] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.672 [2024-11-18 10:44:50.379923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.672 [2024-11-18 10:44:50.379946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.672 [2024-11-18 10:44:50.379952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.672 [2024-11-18 10:44:50.379959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:16:24.672 [2024-11-18 10:44:50.379964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.672 [2024-11-18 10:44:50.381176] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:24.672 [2024-11-18 10:44:50.390933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.672 [2024-11-18 10:44:50.390969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:24.673 [2024-11-18 10:44:50.390978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.759 ms 00:16:24.673 [2024-11-18 10:44:50.390985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.391052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.391064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:24.673 [2024-11-18 10:44:50.391071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:24.673 [2024-11-18 10:44:50.391077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.395502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:24.673 [2024-11-18 10:44:50.395529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.673 [2024-11-18 10:44:50.395536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:16:24.673 [2024-11-18 10:44:50.395542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.395617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.395626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.673 [2024-11-18 10:44:50.395632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:24.673 [2024-11-18 10:44:50.395638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.395655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.395662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.673 [2024-11-18 10:44:50.395669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.673 [2024-11-18 10:44:50.395674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.395693] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:24.673 [2024-11-18 10:44:50.398392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.398417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.673 [2024-11-18 10:44:50.398424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:16:24.673 [2024-11-18 10:44:50.398430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.398457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.398465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.673 [2024-11-18 10:44:50.398471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:24.673 [2024-11-18 10:44:50.398477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.398490] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:24.673 [2024-11-18 10:44:50.398505] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:24.673 [2024-11-18 10:44:50.398532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:24.673 [2024-11-18 10:44:50.398543] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:24.673 [2024-11-18 10:44:50.398621] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:24.673 [2024-11-18 10:44:50.398629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.673 [2024-11-18 10:44:50.398636] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:24.673 [2024-11-18 10:44:50.398644] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.673 [2024-11-18 10:44:50.398653] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.673 [2024-11-18 10:44:50.398659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:24.673 [2024-11-18 10:44:50.398664] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.673 [2024-11-18 10:44:50.398670] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:24.673 [2024-11-18 10:44:50.398675] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:24.673 [2024-11-18 10:44:50.398681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.398687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.673 [2024-11-18 10:44:50.398693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:24.673 [2024-11-18 10:44:50.398699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.398765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.673 [2024-11-18 10:44:50.398772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.673 [2024-11-18 10:44:50.398779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:24.673 [2024-11-18 10:44:50.398784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.673 [2024-11-18 10:44:50.398856] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.673 [2024-11-18 10:44:50.398869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.673 [2024-11-18 10:44:50.398876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.673 [2024-11-18 10:44:50.398881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.673 [2024-11-18 10:44:50.398892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:24.673 [2024-11-18 10:44:50.398903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.673 [2024-11-18 10:44:50.398909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.673 [2024-11-18 10:44:50.398919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.673 [2024-11-18 10:44:50.398924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:24.673 [2024-11-18 10:44:50.398929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.673 [2024-11-18 10:44:50.398939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.673 [2024-11-18 10:44:50.398946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:24.673 [2024-11-18 10:44:50.398951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.673 [2024-11-18 10:44:50.398961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:24.673 [2024-11-18 10:44:50.398966] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.673 [2024-11-18 10:44:50.398976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.673 [2024-11-18 10:44:50.398986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.673 [2024-11-18 10:44:50.398991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:24.673 [2024-11-18 10:44:50.398995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.673 [2024-11-18 10:44:50.399000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.673 [2024-11-18 10:44:50.399005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:24.673 [2024-11-18 10:44:50.399010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.673 [2024-11-18 10:44:50.399015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.673 [2024-11-18 10:44:50.399020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:24.673 [2024-11-18 10:44:50.399025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.673 [2024-11-18 10:44:50.399030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.673 [2024-11-18 10:44:50.399034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:24.673 [2024-11-18 10:44:50.399039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.673 [2024-11-18 10:44:50.399045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.673 [2024-11-18 10:44:50.399050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:24.673 [2024-11-18 10:44:50.399055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.673 [2024-11-18 10:44:50.399060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:24.673 [2024-11-18 10:44:50.399065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:24.674 [2024-11-18 10:44:50.399071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.674 [2024-11-18 10:44:50.399076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:24.674 [2024-11-18 10:44:50.399081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:24.674 [2024-11-18 10:44:50.399085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.674 [2024-11-18 10:44:50.399091] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.674 [2024-11-18 10:44:50.399097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.674 [2024-11-18 10:44:50.399102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.674 [2024-11-18 10:44:50.399110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.674 [2024-11-18 10:44:50.399115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:24.674 [2024-11-18 10:44:50.399120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.674 [2024-11-18 10:44:50.399125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.674 
[2024-11-18 10:44:50.399130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.674 [2024-11-18 10:44:50.399135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.674 [2024-11-18 10:44:50.399140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.674 [2024-11-18 10:44:50.399147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.674 [2024-11-18 10:44:50.399154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:24.674 [2024-11-18 10:44:50.399166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:24.674 [2024-11-18 10:44:50.399171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:24.674 [2024-11-18 10:44:50.399177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:24.674 [2024-11-18 10:44:50.399182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:24.674 [2024-11-18 10:44:50.399187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:24.674 [2024-11-18 10:44:50.399192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:24.674 [2024-11-18 10:44:50.399197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:24.674 [2024-11-18 10:44:50.399212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:24.674 [2024-11-18 10:44:50.399219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:24.674 [2024-11-18 10:44:50.399247] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.674 [2024-11-18 10:44:50.399253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.674 [2024-11-18 10:44:50.399265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.674 [2024-11-18 10:44:50.399270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.674 [2024-11-18 10:44:50.399275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.674 [2024-11-18 10:44:50.399282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.399288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:24.674 [2024-11-18 10:44:50.399297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:16:24.674 [2024-11-18 10:44:50.399302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.420710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.420740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.674 [2024-11-18 10:44:50.420748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.370 ms 00:16:24.674 [2024-11-18 10:44:50.420753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.420850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.420861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.674 [2024-11-18 10:44:50.420867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:24.674 [2024-11-18 10:44:50.420873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.458101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.458135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.674 [2024-11-18 10:44:50.458144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.212 ms 00:16:24.674 [2024-11-18 10:44:50.458152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.458220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.458229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.674 [2024-11-18 10:44:50.458236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:24.674 [2024-11-18 10:44:50.458242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.458531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.458555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.674 [2024-11-18 10:44:50.458563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:16:24.674 [2024-11-18 10:44:50.458569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.458677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.458690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.674 [2024-11-18 10:44:50.458697] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:24.674 [2024-11-18 10:44:50.458703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.469581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.469607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.674 [2024-11-18 10:44:50.469615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.863 ms 00:16:24.674 [2024-11-18 10:44:50.469622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.479311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:24.674 [2024-11-18 10:44:50.479342] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:24.674 [2024-11-18 10:44:50.479352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.479358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:24.674 [2024-11-18 10:44:50.479365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.639 ms 00:16:24.674 [2024-11-18 10:44:50.479371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.497890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.674 [2024-11-18 10:44:50.497923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:24.674 [2024-11-18 10:44:50.497938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.470 ms 00:16:24.674 [2024-11-18 10:44:50.497945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.674 [2024-11-18 10:44:50.506842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.675 [2024-11-18 10:44:50.506870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:24.675 [2024-11-18 10:44:50.506877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.839 ms 00:16:24.675 [2024-11-18 10:44:50.506883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.675 [2024-11-18 10:44:50.515393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.675 [2024-11-18 10:44:50.515421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:24.675 [2024-11-18 10:44:50.515429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.468 ms 00:16:24.675 [2024-11-18 10:44:50.515434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.675 [2024-11-18 10:44:50.515894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.675 [2024-11-18 10:44:50.515913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.675 [2024-11-18 10:44:50.515920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:16:24.675 [2024-11-18 10:44:50.515925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.559879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.934 [2024-11-18 10:44:50.559921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:24.934 [2024-11-18 10:44:50.559931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.936 ms 00:16:24.934 [2024-11-18 10:44:50.559938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.567708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:24.934 [2024-11-18 10:44:50.579224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.934 [2024-11-18 10:44:50.579253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.934 [2024-11-18 10:44:50.579262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.221 ms 00:16:24.934 [2024-11-18 10:44:50.579269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.579338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.934 [2024-11-18 10:44:50.579348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:24.934 [2024-11-18 10:44:50.579355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:24.934 [2024-11-18 10:44:50.579361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.579399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.934 [2024-11-18 10:44:50.579406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.934 [2024-11-18 10:44:50.579412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:24.934 [2024-11-18 10:44:50.579418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.579439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.934 [2024-11-18 10:44:50.579447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.934 [2024-11-18 10:44:50.579454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.934 [2024-11-18 10:44:50.579460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.579483] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:24.934 [2024-11-18 10:44:50.579490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.934 [2024-11-18 10:44:50.579496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:24.934 [2024-11-18 10:44:50.579502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:24.934 [2024-11-18 10:44:50.579507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.934 [2024-11-18 10:44:50.597723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.935 [2024-11-18 10:44:50.597757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.935 [2024-11-18 10:44:50.597765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.201 ms 00:16:24.935 [2024-11-18 10:44:50.597771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.935 [2024-11-18 10:44:50.597844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.935 [2024-11-18 10:44:50.597852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.935 [2024-11-18 10:44:50.597859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:24.935 [2024-11-18 10:44:50.597865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.935 
[2024-11-18 10:44:50.598872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:24.935 [2024-11-18 10:44:50.601403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.431 ms, result 0 00:16:24.935 [2024-11-18 10:44:50.602184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:24.935 [2024-11-18 10:44:50.612942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:25.875  [2024-11-18T10:44:52.701Z] Copying: 49/256 [MB] (49 MBps) [2024-11-18T10:44:53.642Z] Copying: 99/256 [MB] (50 MBps) [2024-11-18T10:44:55.030Z] Copying: 120/256 [MB] (21 MBps) [2024-11-18T10:44:55.974Z] Copying: 139/256 [MB] (18 MBps) [2024-11-18T10:44:56.917Z] Copying: 152864/262144 [kB] (10168 kBps) [2024-11-18T10:44:57.858Z] Copying: 163056/262144 [kB] (10192 kBps) [2024-11-18T10:44:58.876Z] Copying: 173/256 [MB] (14 MBps) [2024-11-18T10:44:59.819Z] Copying: 199/256 [MB] (25 MBps) [2024-11-18T10:45:00.765Z] Copying: 214/256 [MB] (14 MBps) [2024-11-18T10:45:01.712Z] Copying: 226/256 [MB] (12 MBps) [2024-11-18T10:45:02.659Z] Copying: 247/256 [MB] (20 MBps) [2024-11-18T10:45:02.659Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-18 10:45:02.463330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:36.775 [2024-11-18 10:45:02.473545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.473598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:36.775 [2024-11-18 10:45:02.473613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:36.775 [2024-11-18 10:45:02.473622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.473647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:36.775 [2024-11-18 10:45:02.476594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.476644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:36.775 [2024-11-18 10:45:02.476655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.932 ms 00:16:36.775 [2024-11-18 10:45:02.476664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.479668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.479715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:36.775 [2024-11-18 10:45:02.479726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:16:36.775 [2024-11-18 10:45:02.479735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.487885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.487928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:36.775 [2024-11-18 10:45:02.487946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.133 ms 00:16:36.775 [2024-11-18 10:45:02.487954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.494934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:36.775 [2024-11-18 10:45:02.494977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:36.775 [2024-11-18 10:45:02.494988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.936 ms 00:16:36.775 [2024-11-18 10:45:02.494997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.520505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.520553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:36.775 [2024-11-18 10:45:02.520566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.447 ms 00:16:36.775 [2024-11-18 10:45:02.520575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.536343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.536387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:36.775 [2024-11-18 10:45:02.536420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.719 ms 00:16:36.775 [2024-11-18 10:45:02.536433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.536582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.536595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:36.775 [2024-11-18 10:45:02.536605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:36.775 [2024-11-18 10:45:02.536615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.775 [2024-11-18 10:45:02.562390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.775 [2024-11-18 10:45:02.562434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:36.775 [2024-11-18 10:45:02.562446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.758 ms 00:16:36.775 [2024-11-18 10:45:02.562454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.776 [2024-11-18 10:45:02.587733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.776 [2024-11-18 10:45:02.587776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:36.776 [2024-11-18 10:45:02.587787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.218 ms 00:16:36.776 [2024-11-18 10:45:02.587794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.776 [2024-11-18 10:45:02.612276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.776 [2024-11-18 10:45:02.612320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:36.776 [2024-11-18 10:45:02.612333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.415 ms 00:16:36.776 [2024-11-18 10:45:02.612341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.776 [2024-11-18 10:45:02.636886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.776 [2024-11-18 10:45:02.636930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:36.776 [2024-11-18 10:45:02.636941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.443 ms 00:16:36.776 [2024-11-18 10:45:02.636948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.776 [2024-11-18 
10:45:02.636993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:36.776 [2024-11-18 10:45:02.637016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 
10:45:02.637233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:16:36.776 [2024-11-18 10:45:02.637437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:36.776 [2024-11-18 10:45:02.637657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:36.777 [2024-11-18 10:45:02.637841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:36.777 [2024-11-18 10:45:02.637850] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:16:36.777 [2024-11-18 10:45:02.637858] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:36.777 [2024-11-18 10:45:02.637868] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:36.777 [2024-11-18 10:45:02.637876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:36.777 [2024-11-18 10:45:02.637885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:36.777 [2024-11-18 10:45:02.637892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:36.777 [2024-11-18 10:45:02.637900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:36.777 [2024-11-18 10:45:02.637907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:36.777 [2024-11-18 10:45:02.637914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:36.777 [2024-11-18 10:45:02.637920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:36.777 [2024-11-18 10:45:02.637927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.777 [2024-11-18 10:45:02.637935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:36.777 [2024-11-18 10:45:02.637947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:16:36.777 [2024-11-18 10:45:02.637954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.777 [2024-11-18 10:45:02.651323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.777 [2024-11-18 10:45:02.651365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:36.777 [2024-11-18 10:45:02.651378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.335 ms 00:16:36.777 [2024-11-18 10:45:02.651386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.777 [2024-11-18 10:45:02.651784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.777 [2024-11-18 10:45:02.651819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:36.777 [2024-11-18 10:45:02.651830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:16:36.777 [2024-11-18 10:45:02.651839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.690765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.690811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.039 [2024-11-18 10:45:02.690822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.690831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.690936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.690950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.039 [2024-11-18 10:45:02.690958] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.690967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.691018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.691028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.039 [2024-11-18 10:45:02.691037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.691045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.691064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.691072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.039 [2024-11-18 10:45:02.691085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.691094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.775919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.775974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.039 [2024-11-18 10:45:02.775987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.775995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.039 [2024-11-18 10:45:02.846100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.039 [2024-11-18 10:45:02.846236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.039 [2024-11-18 10:45:02.846299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.039 [2024-11-18 10:45:02.846427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:16:37.039 [2024-11-18 10:45:02.846488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.039 [2024-11-18 10:45:02.846564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.039 [2024-11-18 10:45:02.846632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.039 [2024-11-18 10:45:02.846641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.039 [2024-11-18 10:45:02.846654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.039 [2024-11-18 10:45:02.846812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.257 ms, result 0 00:16:38.426 00:16:38.426 00:16:38.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.426 10:45:03 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73811 00:16:38.426 10:45:03 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73811 00:16:38.426 10:45:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73811 ']' 00:16:38.426 10:45:03 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:38.426 10:45:03 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.427 10:45:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.427 10:45:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.427 10:45:03 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.427 10:45:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:38.427 [2024-11-18 10:45:04.008345] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:38.427 [2024-11-18 10:45:04.008510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73811 ] 00:16:38.427 [2024-11-18 10:45:04.174446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.427 [2024-11-18 10:45:04.300339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.369 10:45:04 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.369 10:45:04 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:39.369 10:45:04 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:39.369 [2024-11-18 10:45:05.199479] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:39.369 [2024-11-18 10:45:05.199558] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:39.632 [2024-11-18 10:45:05.377880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.377943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:39.632 [2024-11-18 10:45:05.377961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:39.632 [2024-11-18 10:45:05.377970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.380983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.381034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:39.632 [2024-11-18 10:45:05.381047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:16:39.632 [2024-11-18 10:45:05.381055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.381170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:39.632 [2024-11-18 10:45:05.382305] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:39.632 [2024-11-18 10:45:05.382362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.382373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:39.632 [2024-11-18 10:45:05.382385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:16:39.632 [2024-11-18 10:45:05.382393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.384150] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:39.632 [2024-11-18 10:45:05.398491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.398547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:39.632 [2024-11-18 10:45:05.398562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.349 ms 00:16:39.632 [2024-11-18 10:45:05.398573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.398686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.398701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:39.632 [2024-11-18 10:45:05.398711] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:39.632 [2024-11-18 10:45:05.398722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.406651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.406704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.632 [2024-11-18 10:45:05.406715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.875 ms 00:16:39.632 [2024-11-18 10:45:05.406725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.406839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.406855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.632 [2024-11-18 10:45:05.406864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:39.632 [2024-11-18 10:45:05.406875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.406909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.406920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:39.632 [2024-11-18 10:45:05.406928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:39.632 [2024-11-18 10:45:05.406937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.406962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:39.632 [2024-11-18 10:45:05.410839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.410879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.632 [2024-11-18 10:45:05.410892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.881 ms 00:16:39.632 [2024-11-18 10:45:05.410900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.410977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.410987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:39.632 [2024-11-18 10:45:05.410998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:39.632 [2024-11-18 10:45:05.411009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.411032] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:39.632 [2024-11-18 10:45:05.411052] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:39.632 [2024-11-18 10:45:05.411097] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:39.632 [2024-11-18 10:45:05.411114] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:39.632 [2024-11-18 10:45:05.411238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:39.632 [2024-11-18 10:45:05.411252] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:39.632 [2024-11-18 10:45:05.411268] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:39.632 [2024-11-18 10:45:05.411282] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:39.632 [2024-11-18 10:45:05.411295] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:39.632 [2024-11-18 10:45:05.411303] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:39.632 [2024-11-18 10:45:05.411313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:39.632 [2024-11-18 10:45:05.411321] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:39.632 [2024-11-18 10:45:05.411337] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:39.632 [2024-11-18 10:45:05.411346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.411356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:39.632 [2024-11-18 10:45:05.411364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:16:39.632 [2024-11-18 10:45:05.411375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.411466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.632 [2024-11-18 10:45:05.411486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:39.632 [2024-11-18 10:45:05.411494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:39.632 [2024-11-18 10:45:05.411505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.632 [2024-11-18 10:45:05.411612] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:39.632 [2024-11-18 10:45:05.411633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:39.632 [2024-11-18 10:45:05.411642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.632 [2024-11-18 10:45:05.411652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.632 [2024-11-18 10:45:05.411660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:39.632 [2024-11-18 10:45:05.411670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:39.632 [2024-11-18 10:45:05.411678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:39.632 [2024-11-18 10:45:05.411691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:39.632 [2024-11-18 10:45:05.411699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:39.632 [2024-11-18 10:45:05.411709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.632 [2024-11-18 10:45:05.411717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:39.632 [2024-11-18 10:45:05.411728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:39.632 [2024-11-18 10:45:05.411735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.632 [2024-11-18 10:45:05.411744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:39.632 [2024-11-18 10:45:05.411751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:39.632 [2024-11-18 10:45:05.411761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.632 
[2024-11-18 10:45:05.411772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:39.632 [2024-11-18 10:45:05.411782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:39.633 [2024-11-18 10:45:05.411789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:39.633 [2024-11-18 10:45:05.411810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.633 [2024-11-18 10:45:05.411827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:39.633 [2024-11-18 10:45:05.411839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.633 [2024-11-18 10:45:05.411854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:39.633 [2024-11-18 10:45:05.411861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.633 [2024-11-18 10:45:05.411878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:39.633 [2024-11-18 10:45:05.411886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.633 [2024-11-18 10:45:05.411902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:39.633 [2024-11-18 10:45:05.411909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.633 [2024-11-18 10:45:05.411924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:39.633 [2024-11-18 10:45:05.411934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:39.633 [2024-11-18 10:45:05.411941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.633 [2024-11-18 10:45:05.411949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:39.633 [2024-11-18 10:45:05.411956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:39.633 [2024-11-18 10:45:05.411966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:39.633 [2024-11-18 10:45:05.411982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:39.633 [2024-11-18 10:45:05.411988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.633 [2024-11-18 10:45:05.411997] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:39.633 [2024-11-18 10:45:05.412004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:39.633 [2024-11-18 10:45:05.412016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.633 [2024-11-18 10:45:05.412024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.633 [2024-11-18 10:45:05.412033] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:39.633 [2024-11-18 10:45:05.412041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:39.633 [2024-11-18 10:45:05.412051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:39.633 [2024-11-18 10:45:05.412058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:39.633 [2024-11-18 10:45:05.412066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:39.633 [2024-11-18 10:45:05.412074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:39.633 [2024-11-18 10:45:05.412085] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:39.633 [2024-11-18 10:45:05.412095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:39.633 [2024-11-18 10:45:05.412117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:39.633 [2024-11-18 10:45:05.412138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:39.633 [2024-11-18 10:45:05.412146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:39.633 [2024-11-18 10:45:05.412155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:39.633 [2024-11-18 10:45:05.412164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:39.633 [2024-11-18 10:45:05.412174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:39.633 [2024-11-18 10:45:05.412181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:39.633 [2024-11-18 10:45:05.412190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:39.633 [2024-11-18 10:45:05.412197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:39.633 [2024-11-18 10:45:05.412261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:39.633 [2024-11-18 
10:45:05.412272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:39.633 [2024-11-18 10:45:05.412291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:39.633 [2024-11-18 10:45:05.412300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:39.633 [2024-11-18 10:45:05.412307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:39.633 [2024-11-18 10:45:05.412317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.412325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:39.633 [2024-11-18 10:45:05.412336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:16:39.633 [2024-11-18 10:45:05.412343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.445061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.445111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.633 [2024-11-18 10:45:05.445125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.643 ms 00:16:39.633 [2024-11-18 10:45:05.445134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.445287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.445300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:39.633 [2024-11-18 10:45:05.445311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:39.633 [2024-11-18 10:45:05.445320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.480046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.480091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.633 [2024-11-18 10:45:05.480109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.700 ms 00:16:39.633 [2024-11-18 10:45:05.480117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.480219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.480230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.633 [2024-11-18 10:45:05.480241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:39.633 [2024-11-18 10:45:05.480249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.480818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.480853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.633 [2024-11-18 10:45:05.480868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:16:39.633 [2024-11-18 10:45:05.480876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.481028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.481039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.633 [2024-11-18 10:45:05.481049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:39.633 [2024-11-18 10:45:05.481058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.498607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.498649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:39.633 [2024-11-18 10:45:05.498663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.522 ms 00:16:39.633 [2024-11-18 10:45:05.498670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.633 [2024-11-18 10:45:05.512741] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:39.633 [2024-11-18 10:45:05.512791] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:39.633 [2024-11-18 10:45:05.512807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.633 [2024-11-18 10:45:05.512815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:39.633 [2024-11-18 10:45:05.512827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.022 ms 00:16:39.633 [2024-11-18 10:45:05.512835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.538750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.538799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:39.895 [2024-11-18 10:45:05.538814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.818 ms 00:16:39.895 [2024-11-18 10:45:05.538822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.551964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.552010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:39.895 [2024-11-18 10:45:05.552028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.041 ms 00:16:39.895 [2024-11-18 10:45:05.552036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.564516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.564577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:39.895 [2024-11-18 10:45:05.564592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms 00:16:39.895 [2024-11-18 10:45:05.564600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.565272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.565306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:39.895 [2024-11-18 10:45:05.565319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:16:39.895 [2024-11-18 10:45:05.565327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 
10:45:05.639808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.639878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:39.895 [2024-11-18 10:45:05.639899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.450 ms 00:16:39.895 [2024-11-18 10:45:05.639909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.651022] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:39.895 [2024-11-18 10:45:05.669526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.669581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:39.895 [2024-11-18 10:45:05.669597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.511 ms 00:16:39.895 [2024-11-18 10:45:05.669607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.669693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.669706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:39.895 [2024-11-18 10:45:05.669716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:39.895 [2024-11-18 10:45:05.669727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.669784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.669797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:39.895 [2024-11-18 10:45:05.669807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:39.895 [2024-11-18 10:45:05.669817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.669846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.669857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:39.895 [2024-11-18 10:45:05.669865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:39.895 [2024-11-18 10:45:05.669877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.669910] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:39.895 [2024-11-18 10:45:05.669925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.669933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:39.895 [2024-11-18 10:45:05.669947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:39.895 [2024-11-18 10:45:05.669954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.696282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.696334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:39.895 [2024-11-18 10:45:05.696350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.298 ms 00:16:39.895 [2024-11-18 10:45:05.696359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.696499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.895 [2024-11-18 10:45:05.696512] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:39.895 [2024-11-18 10:45:05.696524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:39.895 [2024-11-18 10:45:05.696535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.895 [2024-11-18 10:45:05.697711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.895 [2024-11-18 10:45:05.701171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.507 ms, result 0 00:16:39.895 [2024-11-18 10:45:05.705451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:39.895 Some configs were skipped because the RPC state that can call them passed over. 00:16:39.895 10:45:05 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:40.158 [2024-11-18 10:45:05.954078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.158 [2024-11-18 10:45:05.954150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:40.158 [2024-11-18 10:45:05.954164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:16:40.158 [2024-11-18 10:45:05.954176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.158 [2024-11-18 10:45:05.954226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.255 ms, result 0 00:16:40.158 true 00:16:40.158 10:45:05 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:40.419 [2024-11-18 10:45:06.170173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.419 [2024-11-18 10:45:06.170244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:40.419 [2024-11-18 10:45:06.170260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:16:40.419 [2024-11-18 10:45:06.170269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.419 [2024-11-18 10:45:06.170309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.017 ms, result 0 00:16:40.419 true 00:16:40.419 10:45:06 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73811 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73811 ']' 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73811 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73811 00:16:40.419 killing process with pid 73811 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73811' 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73811 00:16:40.419 10:45:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73811 00:16:40.991 [2024-11-18 10:45:06.861501] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.991 [2024-11-18 10:45:06.861549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.991 [2024-11-18 10:45:06.861559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:40.991 [2024-11-18 10:45:06.861567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.991 [2024-11-18 10:45:06.861584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:40.991 [2024-11-18 10:45:06.863666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.991 [2024-11-18 10:45:06.863692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.991 [2024-11-18 10:45:06.863703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:16:40.991 [2024-11-18 10:45:06.863709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.991 [2024-11-18 10:45:06.863930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.991 [2024-11-18 10:45:06.863943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.991 [2024-11-18 10:45:06.863952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:40.991 [2024-11-18 10:45:06.863958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.991 [2024-11-18 10:45:06.867245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.991 [2024-11-18 10:45:06.867271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.991 [2024-11-18 10:45:06.867281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:16:40.991 [2024-11-18 10:45:06.867287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.991 [2024-11-18 10:45:06.872533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.991 [2024-11-18 10:45:06.872559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:40.991 [2024-11-18 10:45:06.872568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.203 ms 00:16:40.991 [2024-11-18 10:45:06.872574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.880032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.880057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:41.254 [2024-11-18 10:45:06.880067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.414 ms 00:16:41.254 [2024-11-18 10:45:06.880078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.886469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.886498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:41.254 [2024-11-18 10:45:06.886510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.360 ms 00:16:41.254 [2024-11-18 10:45:06.886516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.886623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.886630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:41.254 [2024-11-18 10:45:06.886638] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:41.254 [2024-11-18 10:45:06.886644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.894312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.894337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:41.254 [2024-11-18 10:45:06.894345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.652 ms 00:16:41.254 [2024-11-18 10:45:06.894351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.901881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.901906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:41.254 [2024-11-18 10:45:06.901916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.501 ms 00:16:41.254 [2024-11-18 10:45:06.901922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.908674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.908698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:41.254 [2024-11-18 10:45:06.908709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.722 ms 00:16:41.254 [2024-11-18 10:45:06.908714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.915806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.254 [2024-11-18 10:45:06.915838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:41.254 [2024-11-18 10:45:06.915846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.043 ms 00:16:41.254 [2024-11-18 10:45:06.915851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.254 [2024-11-18 10:45:06.915878] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:41.254 [2024-11-18 10:45:06.915889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915956] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.915999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 
[2024-11-18 10:45:06.916117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:41.254 [2024-11-18 10:45:06.916231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:16:41.255 [2024-11-18 10:45:06.916288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.255 [2024-11-18 10:45:06.916566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:41.255 [2024-11-18 10:45:06.916577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:16:41.255 [2024-11-18 10:45:06.916589] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:41.255 [2024-11-18 10:45:06.916597] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:41.255 [2024-11-18 10:45:06.916603] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:41.255 [2024-11-18 10:45:06.916610] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:41.255 [2024-11-18 10:45:06.916616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:41.255 [2024-11-18 10:45:06.916623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:41.255 [2024-11-18 10:45:06.916630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:41.255 [2024-11-18 10:45:06.916636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:41.255 [2024-11-18 10:45:06.916642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:41.255 [2024-11-18 10:45:06.916649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:41.255 [2024-11-18 10:45:06.916655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:41.255 [2024-11-18 10:45:06.916663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:16:41.255 [2024-11-18 10:45:06.916668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:06.926396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.255 [2024-11-18 10:45:06.926419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:41.255 [2024-11-18 10:45:06.926430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:16:41.255 [2024-11-18 10:45:06.926436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:06.926719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.255 [2024-11-18 10:45:06.926728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:41.255 [2024-11-18 10:45:06.926735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:41.255 [2024-11-18 10:45:06.926742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:06.961593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:06.961621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.255 [2024-11-18 10:45:06.961631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:06.961638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:06.962605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:06.962628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.255 [2024-11-18 10:45:06.962636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:06.962644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:06.962680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:06.962688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.255 [2024-11-18 10:45:06.962696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:06.962702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:06.962717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:06.962723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.255 [2024-11-18 10:45:06.962730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:06.962735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:07.022328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:07.022368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.255 [2024-11-18 10:45:07.022378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:07.022385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 
10:45:07.070567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:07.070597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.255 [2024-11-18 10:45:07.070607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:07.070615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.255 [2024-11-18 10:45:07.070673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.255 [2024-11-18 10:45:07.070681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.255 [2024-11-18 10:45:07.070691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.255 [2024-11-18 10:45:07.070696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.256 [2024-11-18 10:45:07.070720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.256 [2024-11-18 10:45:07.070726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.256 [2024-11-18 10:45:07.070733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.256 [2024-11-18 10:45:07.070739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.256 [2024-11-18 10:45:07.070807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.256 [2024-11-18 10:45:07.070814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.256 [2024-11-18 10:45:07.070822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.256 [2024-11-18 10:45:07.070828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.256 [2024-11-18 10:45:07.070852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.256 [2024-11-18 10:45:07.070859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:41.256 [2024-11-18 10:45:07.070865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.256 [2024-11-18 10:45:07.070871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.256 [2024-11-18 10:45:07.070900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.256 [2024-11-18 10:45:07.070908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.256 [2024-11-18 10:45:07.070917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.256 [2024-11-18 10:45:07.070923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.256 [2024-11-18 10:45:07.070957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.256 [2024-11-18 10:45:07.070964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.256 [2024-11-18 10:45:07.070972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.256 [2024-11-18 10:45:07.070979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.256 [2024-11-18 10:45:07.071080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.562 ms, result 0 00:16:41.828 10:45:07 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:41.828 10:45:07 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.828 [2024-11-18 10:45:07.642073] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:41.828 [2024-11-18 10:45:07.642192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73865 ] 00:16:42.089 [2024-11-18 10:45:07.798093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.089 [2024-11-18 10:45:07.872702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.350 [2024-11-18 10:45:08.078199] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.350 [2024-11-18 10:45:08.078252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.350 [2024-11-18 10:45:08.226040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.350 [2024-11-18 10:45:08.226167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.350 [2024-11-18 10:45:08.226183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:42.350 [2024-11-18 10:45:08.226190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.350 [2024-11-18 10:45:08.228256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.350 [2024-11-18 10:45:08.228282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.350 [2024-11-18 10:45:08.228290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:16:42.350 [2024-11-18 10:45:08.228296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.350 [2024-11-18 10:45:08.228350] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.350 [2024-11-18 10:45:08.228867] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.350 [2024-11-18 10:45:08.228883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.350 [2024-11-18 10:45:08.228889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.350 [2024-11-18 10:45:08.228896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:16:42.350 [2024-11-18 10:45:08.228901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.350 [2024-11-18 10:45:08.230002] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:42.613 [2024-11-18 10:45:08.239448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.239478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:42.613 [2024-11-18 10:45:08.239487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.446 ms 00:16:42.613 [2024-11-18 10:45:08.239493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.239557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.239566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:42.613 [2024-11-18 10:45:08.239573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:16:42.613 [2024-11-18 10:45:08.239578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.243835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.243859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.613 [2024-11-18 10:45:08.243866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.228 ms 00:16:42.613 [2024-11-18 10:45:08.243872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.243943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.243951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.613 [2024-11-18 10:45:08.243958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:42.613 [2024-11-18 10:45:08.243963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.243979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.243986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.613 [2024-11-18 10:45:08.243992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.613 [2024-11-18 10:45:08.243998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.244015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:42.613 [2024-11-18 10:45:08.246732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.246753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.613 [2024-11-18 10:45:08.246760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:16:42.613 [2024-11-18 10:45:08.246766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.246793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.246799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.613 [2024-11-18 10:45:08.246806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:42.613 [2024-11-18 10:45:08.246812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.246825] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:42.613 [2024-11-18 10:45:08.246840] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:42.613 [2024-11-18 10:45:08.246866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:42.613 [2024-11-18 10:45:08.246878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:42.613 [2024-11-18 10:45:08.246958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:42.613 [2024-11-18 10:45:08.246965] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.613 [2024-11-18 10:45:08.246973] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:42.613 [2024-11-18 10:45:08.246980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.613 [2024-11-18 10:45:08.246990] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.613 [2024-11-18 10:45:08.246996] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:42.613 [2024-11-18 10:45:08.247002] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.613 [2024-11-18 10:45:08.247007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:42.613 [2024-11-18 10:45:08.247012] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:42.613 [2024-11-18 10:45:08.247018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.247023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.613 [2024-11-18 10:45:08.247029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:42.613 [2024-11-18 10:45:08.247034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.247101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.613 [2024-11-18 10:45:08.247107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.613 [2024-11-18 10:45:08.247115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:42.613 [2024-11-18 10:45:08.247120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.613 [2024-11-18 10:45:08.247192] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.613 [2024-11-18 10:45:08.247199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.613 [2024-11-18 10:45:08.247219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.613 [2024-11-18 10:45:08.247225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.613 [2024-11-18 10:45:08.247231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.613 [2024-11-18 10:45:08.247236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.613 [2024-11-18 10:45:08.247241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:42.613 [2024-11-18 10:45:08.247247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.613 [2024-11-18 10:45:08.247252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:42.613 [2024-11-18 10:45:08.247258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.613 [2024-11-18 10:45:08.247263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.613 [2024-11-18 10:45:08.247268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:42.613 [2024-11-18 10:45:08.247274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.613 [2024-11-18 10:45:08.247284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.613 [2024-11-18 10:45:08.247289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:42.614 [2024-11-18 10:45:08.247295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247301] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.614 [2024-11-18 10:45:08.247306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.614 [2024-11-18 10:45:08.247321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.614 [2024-11-18 10:45:08.247337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.614 [2024-11-18 10:45:08.247352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.614 [2024-11-18 10:45:08.247367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.614 [2024-11-18 10:45:08.247381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.614 [2024-11-18 10:45:08.247392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.614 [2024-11-18 10:45:08.247397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:42.614 [2024-11-18 10:45:08.247407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.614 [2024-11-18 10:45:08.247412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:42.614 [2024-11-18 10:45:08.247417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:42.614 [2024-11-18 10:45:08.247422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:42.614 [2024-11-18 10:45:08.247432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:42.614 [2024-11-18 10:45:08.247437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247441] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.614 [2024-11-18 10:45:08.247447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.614 [2024-11-18 10:45:08.247453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.614 [2024-11-18 10:45:08.247466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.614 
[2024-11-18 10:45:08.247471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.614 [2024-11-18 10:45:08.247476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.614 [2024-11-18 10:45:08.247481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.614 [2024-11-18 10:45:08.247487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.614 [2024-11-18 10:45:08.247492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.614 [2024-11-18 10:45:08.247498] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.614 [2024-11-18 10:45:08.247504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:42.614 [2024-11-18 10:45:08.247516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:42.614 [2024-11-18 10:45:08.247521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:42.614 [2024-11-18 10:45:08.247527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:42.614 [2024-11-18 10:45:08.247532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:42.614 [2024-11-18 10:45:08.247537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:42.614 [2024-11-18 10:45:08.247542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:42.614 [2024-11-18 10:45:08.247548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:42.614 [2024-11-18 10:45:08.247553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:42.614 [2024-11-18 10:45:08.247558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:42.614 [2024-11-18 10:45:08.247585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.614 [2024-11-18 10:45:08.247591] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.614 [2024-11-18 10:45:08.247602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.614 [2024-11-18 10:45:08.247608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.614 [2024-11-18 10:45:08.247613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.614 [2024-11-18 10:45:08.247619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.247625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.614 [2024-11-18 10:45:08.247632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:16:42.614 [2024-11-18 10:45:08.247638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.268257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.268370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.614 [2024-11-18 10:45:08.268383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.582 ms 00:16:42.614 [2024-11-18 10:45:08.268389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.268488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.268499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.614 [2024-11-18 10:45:08.268506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:42.614 [2024-11-18 10:45:08.268512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.307705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.307735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.614 [2024-11-18 10:45:08.307744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.177 ms 00:16:42.614 [2024-11-18 10:45:08.307752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.307807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.307816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.614 [2024-11-18 10:45:08.307823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:42.614 [2024-11-18 10:45:08.307829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.308110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.308122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.614 [2024-11-18 10:45:08.308129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:42.614 [2024-11-18 10:45:08.308135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 
10:45:08.308268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.308276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.614 [2024-11-18 10:45:08.308283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:42.614 [2024-11-18 10:45:08.308289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.319010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.319037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.614 [2024-11-18 10:45:08.319044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.707 ms 00:16:42.614 [2024-11-18 10:45:08.319050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.328866] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:42.614 [2024-11-18 10:45:08.328893] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:42.614 [2024-11-18 10:45:08.328902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.328909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:42.614 [2024-11-18 10:45:08.328915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.763 ms 00:16:42.614 [2024-11-18 10:45:08.328921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.614 [2024-11-18 10:45:08.347306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.614 [2024-11-18 10:45:08.347416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:42.615 [2024-11-18 10:45:08.347429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.340 ms 00:16:42.615 [2024-11-18 10:45:08.347435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.356343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.356367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:42.615 [2024-11-18 10:45:08.356375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.857 ms 00:16:42.615 [2024-11-18 10:45:08.356380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.365128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.365151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:42.615 [2024-11-18 10:45:08.365158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.703 ms 00:16:42.615 [2024-11-18 10:45:08.365163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.365631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.365650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.615 [2024-11-18 10:45:08.365657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:16:42.615 [2024-11-18 10:45:08.365662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.409454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.409492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:42.615 [2024-11-18 10:45:08.409501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.773 ms 00:16:42.615 [2024-11-18 10:45:08.409507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.417332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:42.615 [2024-11-18 10:45:08.428738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.428767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.615 [2024-11-18 10:45:08.428777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.162 ms 00:16:42.615 [2024-11-18 10:45:08.428783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.428858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.428867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:42.615 [2024-11-18 10:45:08.428873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:42.615 [2024-11-18 10:45:08.428879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.428913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.428920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.615 [2024-11-18 10:45:08.428926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:42.615 [2024-11-18 10:45:08.428931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.428952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.428961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.615 [2024-11-18 10:45:08.428968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:42.615 [2024-11-18 10:45:08.428973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.428996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:42.615 [2024-11-18 10:45:08.429003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.429008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:42.615 [2024-11-18 10:45:08.429014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:42.615 [2024-11-18 10:45:08.429020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.446646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.446747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.615 [2024-11-18 10:45:08.446760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.610 ms 00:16:42.615 [2024-11-18 10:45:08.446766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.446835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.615 [2024-11-18 10:45:08.446843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:42.615 [2024-11-18 10:45:08.446851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:42.615 [2024-11-18 10:45:08.446856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.615 [2024-11-18 10:45:08.447491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.615 [2024-11-18 10:45:08.449780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.231 ms, result 0 00:16:42.615 [2024-11-18 10:45:08.450330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.615 [2024-11-18 10:45:08.464989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.000  [2024-11-18T10:45:10.828Z] Copying: 19/256 [MB] (19 MBps) [2024-11-18T10:45:11.769Z] Copying: 38/256 [MB] (19 MBps) [2024-11-18T10:45:12.708Z] Copying: 57/256 [MB] (19 MBps) [2024-11-18T10:45:13.651Z] Copying: 78/256 [MB] (20 MBps) [2024-11-18T10:45:14.595Z] Copying: 94/256 [MB] (15 MBps) [2024-11-18T10:45:15.539Z] Copying: 104/256 [MB] (10 MBps) [2024-11-18T10:45:16.483Z] Copying: 118/256 [MB] (14 MBps) [2024-11-18T10:45:17.869Z] Copying: 139/256 [MB] (20 MBps) [2024-11-18T10:45:18.811Z] Copying: 150/256 [MB] (11 MBps) [2024-11-18T10:45:19.766Z] Copying: 182/256 [MB] (31 MBps) [2024-11-18T10:45:20.708Z] Copying: 199/256 [MB] (17 MBps) [2024-11-18T10:45:21.652Z] Copying: 216/256 [MB] (17 MBps) [2024-11-18T10:45:22.595Z] Copying: 237/256 [MB] (20 MBps) [2024-11-18T10:45:22.595Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-18 10:45:22.428620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.711 [2024-11-18 10:45:22.439015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.711 [2024-11-18 10:45:22.439229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:56.711 [2024-11-18 10:45:22.439370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:56.711 [2024-11-18 10:45:22.439409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.711 [2024-11-18 10:45:22.439457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:56.711 [2024-11-18 10:45:22.442474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.711 [2024-11-18 10:45:22.442641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:56.711 [2024-11-18 10:45:22.442715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.978 ms 00:16:56.712 [2024-11-18 10:45:22.442738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.443023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.443231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:56.712 [2024-11-18 10:45:22.443262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:16:56.712 [2024-11-18 10:45:22.443282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.446994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.447106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:16:56.712 [2024-11-18 10:45:22.447160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:16:56.712 [2024-11-18 10:45:22.447185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.454140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.454330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:56.712 [2024-11-18 10:45:22.454626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.904 ms 00:16:56.712 [2024-11-18 10:45:22.454652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.480678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.480857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:56.712 [2024-11-18 10:45:22.480916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.945 ms 00:16:56.712 [2024-11-18 10:45:22.480938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.497218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.497399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.712 [2024-11-18 10:45:22.497461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.138 ms 00:16:56.712 [2024-11-18 10:45:22.497492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.497679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.497709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.712 [2024-11-18 10:45:22.497780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:56.712 [2024-11-18 10:45:22.497803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.524549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.524722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:56.712 [2024-11-18 10:45:22.524788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.701 ms 00:16:56.712 [2024-11-18 10:45:22.524809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.551255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.551426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:56.712 [2024-11-18 10:45:22.551485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.291 ms 00:16:56.712 [2024-11-18 10:45:22.551506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.712 [2024-11-18 10:45:22.576981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.712 [2024-11-18 10:45:22.577166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.712 [2024-11-18 10:45:22.577242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.407 ms 00:16:56.712 [2024-11-18 10:45:22.577266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.975 [2024-11-18 10:45:22.602795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.975 [2024-11-18 10:45:22.602978] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.976 [2024-11-18 10:45:22.603038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.430 ms 00:16:56.976 [2024-11-18 10:45:22.603061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.976 [2024-11-18 10:45:22.603150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:56.976 [2024-11-18 10:45:22.603184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.603987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:56.976 [2024-11-18 10:45:22.604104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:56.976 [2024-11-18 10:45:22.604893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.604984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.605004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.605011] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.605019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.605026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.605034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:56.977 [2024-11-18 10:45:22.605051] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.977 [2024-11-18 10:45:22.605060] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:16:56.977 [2024-11-18 10:45:22.605069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.977 [2024-11-18 10:45:22.605077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.977 [2024-11-18 10:45:22.605085] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.977 [2024-11-18 10:45:22.605093] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.977 [2024-11-18 10:45:22.605100] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.977 [2024-11-18 10:45:22.605108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.977 [2024-11-18 10:45:22.605116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.977 [2024-11-18 10:45:22.605122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.977 [2024-11-18 10:45:22.605129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.977 [2024-11-18 10:45:22.605137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.977 [2024-11-18 10:45:22.605148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.977 [2024-11-18 10:45:22.605159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:16:56.977 [2024-11-18 10:45:22.605167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.618769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.977 [2024-11-18 10:45:22.618946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.977 [2024-11-18 10:45:22.618965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.572 ms 00:16:56.977 [2024-11-18 10:45:22.618973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.619411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.977 [2024-11-18 10:45:22.619432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.977 [2024-11-18 10:45:22.619444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:16:56.977 [2024-11-18 10:45:22.619452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.658696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.658749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.977 [2024-11-18 10:45:22.658761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.658770] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.658884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.658895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.977 [2024-11-18 10:45:22.658904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.658912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.658966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.658976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.977 [2024-11-18 10:45:22.658985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.658993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.659012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.659024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.977 [2024-11-18 10:45:22.659033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.659041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.745010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.745068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.977 [2024-11-18 10:45:22.745083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.745092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.815529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.815594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.977 [2024-11-18 10:45:22.815607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.815615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.815698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.815708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.977 [2024-11-18 10:45:22.815718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.815726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.815758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.815767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.977 [2024-11-18 10:45:22.815779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.815786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.815882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.815893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.977 [2024-11-18 10:45:22.815902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:56.977 [2024-11-18 10:45:22.815910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.815945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.815956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:56.977 [2024-11-18 10:45:22.815965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.815977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.816021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.816031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.977 [2024-11-18 10:45:22.816040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.816049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.816096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.977 [2024-11-18 10:45:22.816107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.977 [2024-11-18 10:45:22.816119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.977 [2024-11-18 10:45:22.816127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.977 [2024-11-18 10:45:22.816319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.309 ms, result 0 00:16:57.920 00:16:57.920 00:16:57.920 10:45:23 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:57.920 10:45:23 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:58.491 10:45:24 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.491 [2024-11-18 10:45:24.222194] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
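At this point trim.sh verifies the trimmed region and then rewrites the test pattern: the first 4 MiB of the dumped data file is compared against /dev/zero (trimmed blocks are expected to read back as zeroes), the file is checksummed, and spdk_dd writes 1024 blocks of the random pattern back through the ftl0 bdev using the JSON config persisted earlier. A minimal sketch of that sequence, using only the paths shown in the xtrace output above and omitting trim.sh's surrounding checksum bookkeeping:

  # Paths as they appear in the xtrace output above.
  spdk=/home/vagrant/spdk_repo/spdk
  data=$spdk/test/ftl/data

  # Trimmed blocks should read back as zeroes: compare the first
  # 4 MiB (4194304 bytes) of the dump against /dev/zero.
  cmp --bytes=4194304 "$data" /dev/zero

  # Checksum the dump for later comparison.
  md5sum "$data"

  # Re-write a 1024-block random pattern through the ftl0 bdev,
  # restoring the bdev stack from the saved JSON configuration.
  "$spdk/build/bin/spdk_dd" --if="$spdk/test/ftl/random_pattern" \
    --ob=ftl0 --count=1024 --json="$spdk/test/ftl/config/ftl.json"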
00:16:58.491 [2024-11-18 10:45:24.222351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74041 ] 00:16:58.751 [2024-11-18 10:45:24.387199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.751 [2024-11-18 10:45:24.509394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.012 [2024-11-18 10:45:24.808044] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.012 [2024-11-18 10:45:24.808419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.277 [2024-11-18 10:45:24.969550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.277 [2024-11-18 10:45:24.969620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.277 [2024-11-18 10:45:24.969636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.277 [2024-11-18 10:45:24.969645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.277 [2024-11-18 10:45:24.972707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.277 [2024-11-18 10:45:24.972763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.278 [2024-11-18 10:45:24.972774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:16:59.278 [2024-11-18 10:45:24.972782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.972911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.278 [2024-11-18 10:45:24.973680] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.278 [2024-11-18 10:45:24.973712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:24.973721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.278 [2024-11-18 10:45:24.973732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:16:59.278 [2024-11-18 10:45:24.973740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.975616] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:59.278 [2024-11-18 10:45:24.990119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:24.990177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:59.278 [2024-11-18 10:45:24.990192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:16:59.278 [2024-11-18 10:45:24.990200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.990360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:24.990374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:59.278 [2024-11-18 10:45:24.990384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:59.278 [2024-11-18 10:45:24.990392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.998815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:59.278 [2024-11-18 10:45:24.998868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.278 [2024-11-18 10:45:24.998878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.374 ms 00:16:59.278 [2024-11-18 10:45:24.998887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.998998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:24.999008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.278 [2024-11-18 10:45:24.999018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:59.278 [2024-11-18 10:45:24.999026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.999053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:24.999064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.278 [2024-11-18 10:45:24.999073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:59.278 [2024-11-18 10:45:24.999081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:24.999102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:59.278 [2024-11-18 10:45:25.003286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:25.003330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.278 [2024-11-18 10:45:25.003341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms 00:16:59.278 [2024-11-18 10:45:25.003349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:25.003431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:25.003442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.278 [2024-11-18 10:45:25.003451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:59.278 [2024-11-18 10:45:25.003460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:25.003482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:59.278 [2024-11-18 10:45:25.003507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:59.278 [2024-11-18 10:45:25.003544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:59.278 [2024-11-18 10:45:25.003561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:59.278 [2024-11-18 10:45:25.003667] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.278 [2024-11-18 10:45:25.003678] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.278 [2024-11-18 10:45:25.003690] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:59.278 [2024-11-18 10:45:25.003700] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.278 [2024-11-18 10:45:25.003714] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.278 [2024-11-18 10:45:25.003722] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:59.278 [2024-11-18 10:45:25.003731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.278 [2024-11-18 10:45:25.003738] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.278 [2024-11-18 10:45:25.003746] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.278 [2024-11-18 10:45:25.003753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:25.003761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.278 [2024-11-18 10:45:25.003770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:16:59.278 [2024-11-18 10:45:25.003777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:25.003866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.278 [2024-11-18 10:45:25.003874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.278 [2024-11-18 10:45:25.003885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:59.278 [2024-11-18 10:45:25.003892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.278 [2024-11-18 10:45:25.003991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.278 [2024-11-18 10:45:25.004003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.278 [2024-11-18 10:45:25.004011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.278 [2024-11-18 10:45:25.004034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.278 [2024-11-18 10:45:25.004058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.278 [2024-11-18 10:45:25.004072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.278 [2024-11-18 10:45:25.004078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:59.278 [2024-11-18 10:45:25.004085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.278 [2024-11-18 10:45:25.004101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.278 [2024-11-18 10:45:25.004109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:59.278 [2024-11-18 10:45:25.004116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.278 [2024-11-18 10:45:25.004131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004138] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.278 [2024-11-18 10:45:25.004152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.278 [2024-11-18 10:45:25.004172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.278 [2024-11-18 10:45:25.004192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.278 [2024-11-18 10:45:25.004244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.278 [2024-11-18 10:45:25.004258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.278 [2024-11-18 10:45:25.004265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.278 [2024-11-18 10:45:25.004279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.278 [2024-11-18 10:45:25.004287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:59.278 [2024-11-18 10:45:25.004293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.278 [2024-11-18 10:45:25.004301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.278 [2024-11-18 10:45:25.004307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:59.278 [2024-11-18 10:45:25.004314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.278 [2024-11-18 10:45:25.004327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:59.278 [2024-11-18 10:45:25.004337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.278 [2024-11-18 10:45:25.004345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.278 [2024-11-18 10:45:25.004354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.279 [2024-11-18 10:45:25.004364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.279 [2024-11-18 10:45:25.004374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.279 [2024-11-18 10:45:25.004383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.279 [2024-11-18 10:45:25.004391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.279 [2024-11-18 10:45:25.004425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.279 
[2024-11-18 10:45:25.004432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.279 [2024-11-18 10:45:25.004439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.279 [2024-11-18 10:45:25.004447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.279 [2024-11-18 10:45:25.004456] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.279 [2024-11-18 10:45:25.004466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:59.279 [2024-11-18 10:45:25.004493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:59.279 [2024-11-18 10:45:25.004501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:59.279 [2024-11-18 10:45:25.004508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:59.279 [2024-11-18 10:45:25.004516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:59.279 [2024-11-18 10:45:25.004524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:59.279 [2024-11-18 10:45:25.004531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:59.279 [2024-11-18 10:45:25.004539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:59.279 [2024-11-18 10:45:25.004547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:59.279 [2024-11-18 10:45:25.004554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:59.279 [2024-11-18 10:45:25.004592] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.279 [2024-11-18 10:45:25.004601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.279 [2024-11-18 10:45:25.004617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.279 [2024-11-18 10:45:25.004626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.279 [2024-11-18 10:45:25.004634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.279 [2024-11-18 10:45:25.004642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.004651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.279 [2024-11-18 10:45:25.004663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:16:59.279 [2024-11-18 10:45:25.004671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.037084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.037140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.279 [2024-11-18 10:45:25.037152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.359 ms 00:16:59.279 [2024-11-18 10:45:25.037161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.037322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.037353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.279 [2024-11-18 10:45:25.037363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:59.279 [2024-11-18 10:45:25.037371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.085183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.085254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.279 [2024-11-18 10:45:25.085268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.787 ms 00:16:59.279 [2024-11-18 10:45:25.085281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.085403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.085415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.279 [2024-11-18 10:45:25.085425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:59.279 [2024-11-18 10:45:25.085434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.085965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.086002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.279 [2024-11-18 10:45:25.086013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:16:59.279 [2024-11-18 10:45:25.086028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.086187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.086234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.279 [2024-11-18 10:45:25.086245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:59.279 [2024-11-18 10:45:25.086253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.102670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.102720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.279 [2024-11-18 10:45:25.102731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.392 ms 00:16:59.279 [2024-11-18 10:45:25.102740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.117166] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:59.279 [2024-11-18 10:45:25.117247] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:59.279 [2024-11-18 10:45:25.117262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.117271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:59.279 [2024-11-18 10:45:25.117281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.399 ms 00:16:59.279 [2024-11-18 10:45:25.117288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.279 [2024-11-18 10:45:25.145456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.279 [2024-11-18 10:45:25.145519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:59.279 [2024-11-18 10:45:25.145532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.064 ms 00:16:59.279 [2024-11-18 10:45:25.145540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.158745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.158796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:59.578 [2024-11-18 10:45:25.158810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:16:59.578 [2024-11-18 10:45:25.158818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.172038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.172087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:59.578 [2024-11-18 10:45:25.172100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.124 ms 00:16:59.578 [2024-11-18 10:45:25.172107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.172831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.172868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.578 [2024-11-18 10:45:25.172879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:16:59.578 [2024-11-18 10:45:25.172888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.240093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.240430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:59.578 [2024-11-18 10:45:25.240462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.175 ms 00:16:59.578 [2024-11-18 10:45:25.240472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.252109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.578 [2024-11-18 10:45:25.271757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.271812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.578 [2024-11-18 10:45:25.271828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.119 ms 00:16:59.578 [2024-11-18 10:45:25.271836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.271940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.271952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:59.578 [2024-11-18 10:45:25.271962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:59.578 [2024-11-18 10:45:25.271971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.272030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.272040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.578 [2024-11-18 10:45:25.272048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:59.578 [2024-11-18 10:45:25.272057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.272086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.272097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.578 [2024-11-18 10:45:25.272106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:59.578 [2024-11-18 10:45:25.272114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.272152] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:59.578 [2024-11-18 10:45:25.272164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.272172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:59.578 [2024-11-18 10:45:25.272181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:59.578 [2024-11-18 10:45:25.272189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.298411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.298618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.578 [2024-11-18 10:45:25.298643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.168 ms 00:16:59.578 [2024-11-18 10:45:25.298652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.578 [2024-11-18 10:45:25.298792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.578 [2024-11-18 10:45:25.298805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.578 [2024-11-18 10:45:25.298815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:59.578 [2024-11-18 10:45:25.298824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:59.578 [2024-11-18 10:45:25.299975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.578 [2024-11-18 10:45:25.303689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.097 ms, result 0 00:16:59.578 [2024-11-18 10:45:25.304923] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.578 [2024-11-18 10:45:25.318870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.861  [2024-11-18T10:45:25.745Z] Copying: 4096/4096 [kB] (average 9822 kBps)[2024-11-18 10:45:25.739643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.123 [2024-11-18 10:45:25.749391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.123 [2024-11-18 10:45:25.749440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:00.123 [2024-11-18 10:45:25.749455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.123 [2024-11-18 10:45:25.749472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.749497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:00.124 [2024-11-18 10:45:25.752420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.752632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:00.124 [2024-11-18 10:45:25.752655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:17:00.124 [2024-11-18 10:45:25.752664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.755849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.756028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:00.124 [2024-11-18 10:45:25.756049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:17:00.124 [2024-11-18 10:45:25.756058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.760436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.760486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:00.124 [2024-11-18 10:45:25.760497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:17:00.124 [2024-11-18 10:45:25.760505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.767420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.767464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:00.124 [2024-11-18 10:45:25.767475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:17:00.124 [2024-11-18 10:45:25.767483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.793573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.793626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:00.124 [2024-11-18 10:45:25.793639] mngt/ftl_mngt.c: 430:trace_step: 
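The 'FTL startup' total above (330.097 ms) is roughly the sum of the per-step durations that trace_step emits as Action / name / duration / status quartets. A hypothetical helper (not part of the SPDK tree) for eyeballing which steps dominate, assuming a captured log with one entry per line as in the raw console output:

  # ftl_step_times.sh <logfile> - print FTL management steps,
  # slowest first (hypothetical helper, not part of the SPDK tree).
  awk '
    /428:trace_step/ { sub(/.*name: /, "");     name = $0 }
    /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                       printf "%10.3f ms  %s\n", $0, name }
  ' "$1" | sort -rn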
*NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:17:00.124 [2024-11-18 10:45:25.793646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.809717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.809776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:00.124 [2024-11-18 10:45:25.809792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.986 ms 00:17:00.124 [2024-11-18 10:45:25.809801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.809958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.809969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:00.124 [2024-11-18 10:45:25.809979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:00.124 [2024-11-18 10:45:25.809986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.836927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.836973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:00.124 [2024-11-18 10:45:25.836985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.913 ms 00:17:00.124 [2024-11-18 10:45:25.836991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.862738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.862787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:00.124 [2024-11-18 10:45:25.862799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.681 ms 00:17:00.124 [2024-11-18 10:45:25.862805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.888079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.888125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:00.124 [2024-11-18 10:45:25.888137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.222 ms 00:17:00.124 [2024-11-18 10:45:25.888145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.913704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.124 [2024-11-18 10:45:25.913754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:00.124 [2024-11-18 10:45:25.913766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.435 ms 00:17:00.124 [2024-11-18 10:45:25.913772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.124 [2024-11-18 10:45:25.913838] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:00.124 [2024-11-18 10:45:25.913853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:00.124 [2024-11-18 10:45:25.913887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.913993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:00.124 [2024-11-18 10:45:25.914249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914458] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:00.125 [2024-11-18 10:45:25.914651] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:00.125 [2024-11-18 10:45:25.914661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:17:00.125 [2024-11-18 10:45:25.914669] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:00.125 [2024-11-18 10:45:25.914677] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:00.125 [2024-11-18 10:45:25.914684] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:00.125 [2024-11-18 10:45:25.914692] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:00.125 [2024-11-18 10:45:25.914700] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:00.125 [2024-11-18 10:45:25.914708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:00.125 [2024-11-18 10:45:25.914716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:00.125 [2024-11-18 10:45:25.914722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:00.125 [2024-11-18 10:45:25.914729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:00.125 [2024-11-18 10:45:25.914736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.125 [2024-11-18 10:45:25.914747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:00.125 [2024-11-18 10:45:25.914756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:17:00.125 [2024-11-18 10:45:25.914763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.125 [2024-11-18 10:45:25.928762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.125 [2024-11-18 10:45:25.928811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.125 [2024-11-18 10:45:25.928823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:17:00.125 [2024-11-18 10:45:25.928831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.125 [2024-11-18 10:45:25.929271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.125 [2024-11-18 10:45:25.929288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.125 [2024-11-18 10:45:25.929298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:17:00.125 [2024-11-18 10:45:25.929306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.125 [2024-11-18 10:45:25.968976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.125 [2024-11-18 10:45:25.969028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.125 [2024-11-18 10:45:25.969040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.125 [2024-11-18 10:45:25.969048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.125 [2024-11-18 10:45:25.969139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.125 [2024-11-18 10:45:25.969149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.125 [2024-11-18 10:45:25.969158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.125 [2024-11-18 10:45:25.969166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.125 [2024-11-18 10:45:25.969249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.125 [2024-11-18 10:45:25.969260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.125 [2024-11-18 10:45:25.969268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.125 [2024-11-18 10:45:25.969276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.125 [2024-11-18 10:45:25.969295] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.125 [2024-11-18 10:45:25.969308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.125 [2024-11-18 10:45:25.969317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.125 [2024-11-18 10:45:25.969325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.056330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.056388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.387 [2024-11-18 10:45:26.056413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.056422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.127717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.127948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.387 [2024-11-18 10:45:26.127971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.127980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.128058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.387 [2024-11-18 10:45:26.128067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.128076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.128119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.387 [2024-11-18 10:45:26.128135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.128143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.128291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.387 [2024-11-18 10:45:26.128300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.128308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.128354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.387 [2024-11-18 10:45:26.128363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.128376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.128444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.387 [2024-11-18 10:45:26.128453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.128461] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.387 [2024-11-18 10:45:26.128519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.387 [2024-11-18 10:45:26.128531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.387 [2024-11-18 10:45:26.128539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.387 [2024-11-18 10:45:26.128696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.292 ms, result 0 00:17:01.330 00:17:01.330 00:17:01.330 10:45:26 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74068 00:17:01.330 10:45:26 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74068 00:17:01.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.330 10:45:26 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:01.330 10:45:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74068 ']' 00:17:01.330 10:45:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.330 10:45:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.330 10:45:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.330 10:45:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.330 10:45:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:01.330 [2024-11-18 10:45:26.990375] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
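The ftl/trim.sh trace above shows the harness's standard restart pattern: launch spdk_tgt with the ftl_init log flag enabled via -L, stash its pid in svcpid, and block in waitforlisten until the target is accepting RPCs on /var/tmp/spdk.sock. A minimal bash sketch of that readiness poll follows; wait_for_rpc_socket, the retry count, and the 0.1 s interval are illustrative assumptions, not the harness code (the real logic is waitforlisten() in common/autotest_common.sh, whose xtrace lines appear above).

    # Sketch of the wait-for-RPC pattern seen above (hypothetical helper).
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100   # illustrative defaults
        while (( retries-- > 0 )); do
            # Give up early if the target died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # Ready once the UNIX-domain RPC socket answers a no-op request.
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }

    # Usage mirroring trim.sh@92-94:
    #   build/bin/spdk_tgt -L ftl_init & svcpid=$!
    #   wait_for_rpc_socket "$svcpid"

Once the socket answers, the script can safely issue RPCs such as the load_config and bdev_ftl_unmap calls that follow in this log.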
00:17:01.330 [2024-11-18 10:45:26.992751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:17:01.330 [2024-11-18 10:45:27.159670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.591 [2024-11-18 10:45:27.280951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.163 10:45:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.163 10:45:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:02.163 10:45:27 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:02.425 [2024-11-18 10:45:28.173271] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:02.425 [2024-11-18 10:45:28.173350] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:02.687 [2024-11-18 10:45:28.352037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.352310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:02.687 [2024-11-18 10:45:28.352343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:02.687 [2024-11-18 10:45:28.352353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.355357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.355547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.687 [2024-11-18 10:45:28.355571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.970 ms 00:17:02.687 [2024-11-18 10:45:28.355580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.355811] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:02.687 [2024-11-18 10:45:28.356615] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:02.687 [2024-11-18 10:45:28.356653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.356662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.687 [2024-11-18 10:45:28.356675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:17:02.687 [2024-11-18 10:45:28.356682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.358529] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:02.687 [2024-11-18 10:45:28.372694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.372756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:02.687 [2024-11-18 10:45:28.372772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.174 ms 00:17:02.687 [2024-11-18 10:45:28.372783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.372905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.372920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:02.687 [2024-11-18 10:45:28.372930] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:02.687 [2024-11-18 10:45:28.372939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.381475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.381525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.687 [2024-11-18 10:45:28.381536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.481 ms 00:17:02.687 [2024-11-18 10:45:28.381546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.381669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.381682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.687 [2024-11-18 10:45:28.381691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:02.687 [2024-11-18 10:45:28.381705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.381734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.381744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:02.687 [2024-11-18 10:45:28.381752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:02.687 [2024-11-18 10:45:28.381761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.381785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:02.687 [2024-11-18 10:45:28.386121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.386162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.687 [2024-11-18 10:45:28.386175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:17:02.687 [2024-11-18 10:45:28.386183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.386283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.386294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:02.687 [2024-11-18 10:45:28.386306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:02.687 [2024-11-18 10:45:28.386317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.386342] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:02.687 [2024-11-18 10:45:28.386363] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:02.687 [2024-11-18 10:45:28.386408] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:02.687 [2024-11-18 10:45:28.386425] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:02.687 [2024-11-18 10:45:28.386535] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:02.687 [2024-11-18 10:45:28.386546] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:02.687 [2024-11-18 10:45:28.386577] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:02.687 [2024-11-18 10:45:28.386588] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:02.687 [2024-11-18 10:45:28.386599] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:02.687 [2024-11-18 10:45:28.386609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:02.687 [2024-11-18 10:45:28.386619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:02.687 [2024-11-18 10:45:28.386627] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:02.687 [2024-11-18 10:45:28.386639] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:02.687 [2024-11-18 10:45:28.386647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.386657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:02.687 [2024-11-18 10:45:28.386664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:17:02.687 [2024-11-18 10:45:28.386676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.386765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.687 [2024-11-18 10:45:28.386783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:02.687 [2024-11-18 10:45:28.386791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:02.687 [2024-11-18 10:45:28.386800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.687 [2024-11-18 10:45:28.386908] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:02.687 [2024-11-18 10:45:28.386921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:02.687 [2024-11-18 10:45:28.386930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.687 [2024-11-18 10:45:28.386940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.687 [2024-11-18 10:45:28.386948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:02.687 [2024-11-18 10:45:28.386956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:02.687 [2024-11-18 10:45:28.386963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:02.687 [2024-11-18 10:45:28.386976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:02.687 [2024-11-18 10:45:28.386983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:02.687 [2024-11-18 10:45:28.386992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.687 [2024-11-18 10:45:28.386999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:02.687 [2024-11-18 10:45:28.387007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:02.687 [2024-11-18 10:45:28.387015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.687 [2024-11-18 10:45:28.387024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:02.687 [2024-11-18 10:45:28.387030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:02.687 [2024-11-18 10:45:28.387039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.687 
[2024-11-18 10:45:28.387046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:02.687 [2024-11-18 10:45:28.387055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:02.687 [2024-11-18 10:45:28.387062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.687 [2024-11-18 10:45:28.387072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:02.688 [2024-11-18 10:45:28.387085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.688 [2024-11-18 10:45:28.387104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:02.688 [2024-11-18 10:45:28.387114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.688 [2024-11-18 10:45:28.387130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:02.688 [2024-11-18 10:45:28.387137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.688 [2024-11-18 10:45:28.387152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:02.688 [2024-11-18 10:45:28.387161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.688 [2024-11-18 10:45:28.387178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:02.688 [2024-11-18 10:45:28.387185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.688 [2024-11-18 10:45:28.387202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:02.688 [2024-11-18 10:45:28.387225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:02.688 [2024-11-18 10:45:28.387232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.688 [2024-11-18 10:45:28.387241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:02.688 [2024-11-18 10:45:28.387249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:02.688 [2024-11-18 10:45:28.387260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:02.688 [2024-11-18 10:45:28.387276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:02.688 [2024-11-18 10:45:28.387283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:02.688 [2024-11-18 10:45:28.387301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:02.688 [2024-11-18 10:45:28.387311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.688 [2024-11-18 10:45:28.387319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.688 [2024-11-18 10:45:28.387328] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:02.688 [2024-11-18 10:45:28.387335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:02.688 [2024-11-18 10:45:28.387344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:02.688 [2024-11-18 10:45:28.387352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:02.688 [2024-11-18 10:45:28.387360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:02.688 [2024-11-18 10:45:28.387367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:02.688 [2024-11-18 10:45:28.387377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:02.688 [2024-11-18 10:45:28.387387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:02.688 [2024-11-18 10:45:28.387408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:02.688 [2024-11-18 10:45:28.387417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:02.688 [2024-11-18 10:45:28.387425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:02.688 [2024-11-18 10:45:28.387434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:02.688 [2024-11-18 10:45:28.387441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:02.688 [2024-11-18 10:45:28.387450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:02.688 [2024-11-18 10:45:28.387458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:02.688 [2024-11-18 10:45:28.387467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:02.688 [2024-11-18 10:45:28.387475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:02.688 [2024-11-18 10:45:28.387518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:02.688 [2024-11-18 
10:45:28.387527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:02.688 [2024-11-18 10:45:28.387547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:02.688 [2024-11-18 10:45:28.387556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:02.688 [2024-11-18 10:45:28.387564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:02.688 [2024-11-18 10:45:28.387573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.387581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:02.688 [2024-11-18 10:45:28.387592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:17:02.688 [2024-11-18 10:45:28.387602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.419926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.419980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.688 [2024-11-18 10:45:28.419994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.262 ms 00:17:02.688 [2024-11-18 10:45:28.420005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.420138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.420149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.688 [2024-11-18 10:45:28.420160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:02.688 [2024-11-18 10:45:28.420168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.455461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.455512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.688 [2024-11-18 10:45:28.455525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.266 ms 00:17:02.688 [2024-11-18 10:45:28.455534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.455627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.455637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.688 [2024-11-18 10:45:28.455648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.688 [2024-11-18 10:45:28.455656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.456180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.456238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.688 [2024-11-18 10:45:28.456251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:17:02.688 [2024-11-18 10:45:28.456260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.456457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.456466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.688 [2024-11-18 10:45:28.456477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:02.688 [2024-11-18 10:45:28.456485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.474644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.474852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.688 [2024-11-18 10:45:28.474877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.125 ms 00:17:02.688 [2024-11-18 10:45:28.474886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.489077] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:02.688 [2024-11-18 10:45:28.489130] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:02.688 [2024-11-18 10:45:28.489150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.489159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:02.688 [2024-11-18 10:45:28.489172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.140 ms 00:17:02.688 [2024-11-18 10:45:28.489179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.515461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.515659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:02.688 [2024-11-18 10:45:28.515690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.026 ms 00:17:02.688 [2024-11-18 10:45:28.515699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.536512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.536689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:02.688 [2024-11-18 10:45:28.536719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.427 ms 00:17:02.688 [2024-11-18 10:45:28.536727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.549210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.549265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:02.688 [2024-11-18 10:45:28.549282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms 00:17:02.688 [2024-11-18 10:45:28.549289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.688 [2024-11-18 10:45:28.549942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.688 [2024-11-18 10:45:28.549963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.688 [2024-11-18 10:45:28.549976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:17:02.688 [2024-11-18 10:45:28.549984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 
10:45:28.623677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.623749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:02.950 [2024-11-18 10:45:28.623770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.662 ms 00:17:02.950 [2024-11-18 10:45:28.623780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.635413] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.950 [2024-11-18 10:45:28.654438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.654500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.950 [2024-11-18 10:45:28.654512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.543 ms 00:17:02.950 [2024-11-18 10:45:28.654523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.654620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.654633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.950 [2024-11-18 10:45:28.654643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:02.950 [2024-11-18 10:45:28.654654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.654711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.654722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.950 [2024-11-18 10:45:28.654730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:02.950 [2024-11-18 10:45:28.654743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.654768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.654778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.950 [2024-11-18 10:45:28.654787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:02.950 [2024-11-18 10:45:28.654800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.654835] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.950 [2024-11-18 10:45:28.654850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.654861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.950 [2024-11-18 10:45:28.654871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:02.950 [2024-11-18 10:45:28.654879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.680871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.680919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.950 [2024-11-18 10:45:28.680936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.960 ms 00:17:02.950 [2024-11-18 10:45:28.680945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.681075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.950 [2024-11-18 10:45:28.681086] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.950 [2024-11-18 10:45:28.681101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:02.950 [2024-11-18 10:45:28.681109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.950 [2024-11-18 10:45:28.683021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.950 [2024-11-18 10:45:28.686499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.647 ms, result 0 00:17:02.950 [2024-11-18 10:45:28.688842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.950 Some configs were skipped because the RPC state that can call them passed over. 00:17:02.950 10:45:28 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:03.228 [2024-11-18 10:45:28.933341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.228 [2024-11-18 10:45:28.933532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:03.228 [2024-11-18 10:45:28.933600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 00:17:03.228 [2024-11-18 10:45:28.933628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.228 [2024-11-18 10:45:28.933683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.472 ms, result 0 00:17:03.228 true 00:17:03.228 10:45:28 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:03.489 [2024-11-18 10:45:29.149301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.489 [2024-11-18 10:45:29.149357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:03.489 [2024-11-18 10:45:29.149374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:17:03.489 [2024-11-18 10:45:29.149383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.489 [2024-11-18 10:45:29.149423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.903 ms, result 0 00:17:03.489 true 00:17:03.489 10:45:29 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74068 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74068 ']' 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74068 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74068 00:17:03.489 killing process with pid 74068 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74068' 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74068 00:17:03.489 10:45:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74068 00:17:04.434 [2024-11-18 10:45:29.962633] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.962716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:04.434 [2024-11-18 10:45:29.962732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:04.434 [2024-11-18 10:45:29.962743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.962769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:04.434 [2024-11-18 10:45:29.965826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.966017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:04.434 [2024-11-18 10:45:29.966046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:17:04.434 [2024-11-18 10:45:29.966055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.966396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.966409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:04.434 [2024-11-18 10:45:29.966421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:04.434 [2024-11-18 10:45:29.966430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.971030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.971072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:04.434 [2024-11-18 10:45:29.971087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:17:04.434 [2024-11-18 10:45:29.971095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.978247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.978286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:04.434 [2024-11-18 10:45:29.978299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.105 ms 00:17:04.434 [2024-11-18 10:45:29.978307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.988977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.989145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:04.434 [2024-11-18 10:45:29.989172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.606 ms 00:17:04.434 [2024-11-18 10:45:29.989187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.998310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.998359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:04.434 [2024-11-18 10:45:29.998373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.052 ms 00:17:04.434 [2024-11-18 10:45:29.998381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:29.998536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:29.998547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:04.434 [2024-11-18 10:45:29.998558] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:04.434 [2024-11-18 10:45:29.998566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:30.009665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:30.009712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:04.434 [2024-11-18 10:45:30.009728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.074 ms 00:17:04.434 [2024-11-18 10:45:30.009737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:30.020972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:30.021022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:04.434 [2024-11-18 10:45:30.021042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.178 ms 00:17:04.434 [2024-11-18 10:45:30.021050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:30.032164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:30.032229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:04.434 [2024-11-18 10:45:30.032255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.057 ms 00:17:04.434 [2024-11-18 10:45:30.032266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:30.043254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.434 [2024-11-18 10:45:30.043440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:04.434 [2024-11-18 10:45:30.043467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.897 ms 00:17:04.434 [2024-11-18 10:45:30.043475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.434 [2024-11-18 10:45:30.043907] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:04.434 [2024-11-18 10:45:30.043963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.043981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.043990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 
10:45:30.044064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:04.434 [2024-11-18 10:45:30.044134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:17:04.435 [2024-11-18 10:45:30.044329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:04.435 [2024-11-18 10:45:30.044872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:04.436 [2024-11-18 10:45:30.044881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:04.436 [2024-11-18 10:45:30.044889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:04.436 [2024-11-18 10:45:30.044900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:04.436 [2024-11-18 10:45:30.044908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:04.436 [2024-11-18 10:45:30.044918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:04.436 [2024-11-18 10:45:30.044935] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:04.436 [2024-11-18 10:45:30.044947] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:17:04.436 [2024-11-18 10:45:30.044972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:04.436 [2024-11-18 10:45:30.044982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:04.436 [2024-11-18 10:45:30.044990] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:04.436 [2024-11-18 10:45:30.045000] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:04.436 [2024-11-18 10:45:30.045008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:04.436 [2024-11-18 10:45:30.045018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:04.436 [2024-11-18 10:45:30.045026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:04.436 [2024-11-18 10:45:30.045034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:04.436 [2024-11-18 10:45:30.045040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:04.436 [2024-11-18 10:45:30.045051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:04.436 [2024-11-18 10:45:30.045060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:04.436 [2024-11-18 10:45:30.045072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:17:04.436 [2024-11-18 10:45:30.045082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.059188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.436 [2024-11-18 10:45:30.059249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:04.436 [2024-11-18 10:45:30.059268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.055 ms 00:17:04.436 [2024-11-18 10:45:30.059276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.059704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.436 [2024-11-18 10:45:30.059723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:04.436 [2024-11-18 10:45:30.059739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:04.436 [2024-11-18 10:45:30.059746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.108994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.109063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.436 [2024-11-18 10:45:30.109079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.109089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.109252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.109264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.436 [2024-11-18 10:45:30.109280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.109288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.109347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.109357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.436 [2024-11-18 10:45:30.109370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.109378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.109398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.109409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.436 [2024-11-18 10:45:30.109424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.109438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.195785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.195858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.436 [2024-11-18 10:45:30.195876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.195885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 
10:45:30.265903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.265967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.436 [2024-11-18 10:45:30.265982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.265995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.266098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.436 [2024-11-18 10:45:30.266112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.266120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.266164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.436 [2024-11-18 10:45:30.266175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.266183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.266362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.436 [2024-11-18 10:45:30.266373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.266381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.266431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.436 [2024-11-18 10:45:30.266441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.266449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.266509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.436 [2024-11-18 10:45:30.266523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.266531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.436 [2024-11-18 10:45:30.266596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.436 [2024-11-18 10:45:30.266607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.436 [2024-11-18 10:45:30.266616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.436 [2024-11-18 10:45:30.266774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 304.111 ms, result 0 00:17:05.379 10:45:30 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:05.379 [2024-11-18 10:45:31.050826] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:05.379 [2024-11-18 10:45:31.050965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74127 ] 00:17:05.379 [2024-11-18 10:45:31.214756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.639 [2024-11-18 10:45:31.332903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.901 [2024-11-18 10:45:31.623159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.901 [2024-11-18 10:45:31.623257] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.163 [2024-11-18 10:45:31.785763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.785826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:06.163 [2024-11-18 10:45:31.785843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:06.163 [2024-11-18 10:45:31.785851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.163 [2024-11-18 10:45:31.788924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.788976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.163 [2024-11-18 10:45:31.788987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:17:06.163 [2024-11-18 10:45:31.788996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.163 [2024-11-18 10:45:31.789122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:06.163 [2024-11-18 10:45:31.790020] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:06.163 [2024-11-18 10:45:31.790068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.790077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.163 [2024-11-18 10:45:31.790087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:17:06.163 [2024-11-18 10:45:31.790095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.163 [2024-11-18 10:45:31.791798] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:06.163 [2024-11-18 10:45:31.805930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.805983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:06.163 [2024-11-18 10:45:31.805998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.134 ms 00:17:06.163 [2024-11-18 10:45:31.806006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.163 [2024-11-18 10:45:31.806117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.806129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:06.163 [2024-11-18 10:45:31.806139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:06.163 [2024-11-18 
10:45:31.806146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.163 [2024-11-18 10:45:31.814115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.814159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.163 [2024-11-18 10:45:31.814170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.924 ms 00:17:06.163 [2024-11-18 10:45:31.814178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.163 [2024-11-18 10:45:31.814311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.163 [2024-11-18 10:45:31.814323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.164 [2024-11-18 10:45:31.814331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:06.164 [2024-11-18 10:45:31.814340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.164 [2024-11-18 10:45:31.814367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.164 [2024-11-18 10:45:31.814379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:06.164 [2024-11-18 10:45:31.814387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:06.164 [2024-11-18 10:45:31.814395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.164 [2024-11-18 10:45:31.814416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:06.164 [2024-11-18 10:45:31.818494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.164 [2024-11-18 10:45:31.818531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.164 [2024-11-18 10:45:31.818541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.083 ms 00:17:06.164 [2024-11-18 10:45:31.818548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.164 [2024-11-18 10:45:31.818621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.164 [2024-11-18 10:45:31.818631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:06.164 [2024-11-18 10:45:31.818640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:06.164 [2024-11-18 10:45:31.818648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.164 [2024-11-18 10:45:31.818667] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:06.164 [2024-11-18 10:45:31.818691] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:06.164 [2024-11-18 10:45:31.818729] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:06.164 [2024-11-18 10:45:31.818745] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:06.164 [2024-11-18 10:45:31.818849] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:06.164 [2024-11-18 10:45:31.818860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:06.164 [2024-11-18 10:45:31.818871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
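(A quick aside on the numbers in the layout records that follow, offered as a sketch rather than part of the captured test flow: the ftl_layout dump below reports each region in MiB, while the superblock metadata dump further down lists the same regions as hex block offsets and sizes, blk_offs/blk_sz. Assuming SPDK FTL's 4 KiB block size — an assumption here, not stated in the log — the two views agree, e.g. for region type:0x2, the L2P:)

    # Sketch (assumes FTL block size = 4096 bytes; not part of trim.sh):
    # the superblock dump lists the L2P as blk_offs:0x20 blk_sz:0x5a00.
    echo $(( 0x20 * 4096 ))                  # 131072 bytes = 0.12 MiB, the "offset: 0.12 MiB" line
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))  # 90, matching "Region l2p ... blocks: 90.00 MiB"

(Under that assumption the hex table and the MiB table are two renderings of one layout; the same conversion applies to the band_md, p2l, and trim regions dumped below.)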
00:17:06.164 [2024-11-18 10:45:31.818882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:06.164 [2024-11-18 10:45:31.818894] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:06.164 [2024-11-18 10:45:31.818902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:06.164 [2024-11-18 10:45:31.818910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:06.164 [2024-11-18 10:45:31.818918] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:06.164 [2024-11-18 10:45:31.818927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:06.164 [2024-11-18 10:45:31.818935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.164 [2024-11-18 10:45:31.818943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:06.164 [2024-11-18 10:45:31.818951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:06.164 [2024-11-18 10:45:31.818958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.164 [2024-11-18 10:45:31.819046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.164 [2024-11-18 10:45:31.819055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:06.164 [2024-11-18 10:45:31.819066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:06.164 [2024-11-18 10:45:31.819073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.164 [2024-11-18 10:45:31.819173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:06.164 [2024-11-18 10:45:31.819185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:06.164 [2024-11-18 10:45:31.819194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:06.164 [2024-11-18 10:45:31.819244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:06.164 [2024-11-18 10:45:31.819266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.164 [2024-11-18 10:45:31.819279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:06.164 [2024-11-18 10:45:31.819288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:06.164 [2024-11-18 10:45:31.819295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.164 [2024-11-18 10:45:31.819310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:06.164 [2024-11-18 10:45:31.819317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:06.164 [2024-11-18 10:45:31.819324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:06.164 [2024-11-18 10:45:31.819339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:06.164 [2024-11-18 10:45:31.819359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:06.164 [2024-11-18 10:45:31.819385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:06.164 [2024-11-18 10:45:31.819416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:06.164 [2024-11-18 10:45:31.819451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:06.164 [2024-11-18 10:45:31.819483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.164 [2024-11-18 10:45:31.819504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:06.164 [2024-11-18 10:45:31.819514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:06.164 [2024-11-18 10:45:31.819524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.164 [2024-11-18 10:45:31.819536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:06.164 [2024-11-18 10:45:31.819548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:06.164 [2024-11-18 10:45:31.819559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:06.164 [2024-11-18 10:45:31.819583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:06.164 [2024-11-18 10:45:31.819594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819608] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:06.164 [2024-11-18 10:45:31.819622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:06.164 [2024-11-18 10:45:31.819635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.164 [2024-11-18 10:45:31.819666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:06.164 [2024-11-18 10:45:31.819693] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:06.164 [2024-11-18 10:45:31.819705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:06.164 [2024-11-18 10:45:31.819716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:06.164 [2024-11-18 10:45:31.819728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:06.164 [2024-11-18 10:45:31.819746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:06.164 [2024-11-18 10:45:31.819760] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:06.164 [2024-11-18 10:45:31.819781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.164 [2024-11-18 10:45:31.819796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:06.164 [2024-11-18 10:45:31.819809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:06.164 [2024-11-18 10:45:31.819822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:06.164 [2024-11-18 10:45:31.819839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:06.164 [2024-11-18 10:45:31.819852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:06.164 [2024-11-18 10:45:31.819864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:06.164 [2024-11-18 10:45:31.819871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:06.164 [2024-11-18 10:45:31.819879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:06.164 [2024-11-18 10:45:31.819886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:06.165 [2024-11-18 10:45:31.819894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:06.165 [2024-11-18 10:45:31.819902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:06.165 [2024-11-18 10:45:31.819910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:06.165 [2024-11-18 10:45:31.819918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:06.165 [2024-11-18 10:45:31.819925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:06.165 [2024-11-18 10:45:31.819933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:06.165 [2024-11-18 10:45:31.819941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.165 [2024-11-18 10:45:31.819950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:06.165 [2024-11-18 10:45:31.819957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:06.165 [2024-11-18 10:45:31.819964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:06.165 [2024-11-18 10:45:31.819971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:06.165 [2024-11-18 10:45:31.819981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.819990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:06.165 [2024-11-18 10:45:31.820003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:17:06.165 [2024-11-18 10:45:31.820011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.851628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.851798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.165 [2024-11-18 10:45:31.851861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.554 ms 00:17:06.165 [2024-11-18 10:45:31.851886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.852037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.852071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:06.165 [2024-11-18 10:45:31.852093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:06.165 [2024-11-18 10:45:31.852168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.899679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.899872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.165 [2024-11-18 10:45:31.900317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.446 ms 00:17:06.165 [2024-11-18 10:45:31.900381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.900581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.900615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.165 [2024-11-18 10:45:31.900639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:06.165 [2024-11-18 10:45:31.900657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.901167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.901257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.165 [2024-11-18 10:45:31.901345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:17:06.165 [2024-11-18 10:45:31.901375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.901542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.901601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.165 [2024-11-18 10:45:31.901625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:06.165 [2024-11-18 10:45:31.901644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.917650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.917792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.165 [2024-11-18 10:45:31.917810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.971 ms 00:17:06.165 [2024-11-18 10:45:31.917818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.932181] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:06.165 [2024-11-18 10:45:31.932242] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:06.165 [2024-11-18 10:45:31.932256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.932265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:06.165 [2024-11-18 10:45:31.932274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.323 ms 00:17:06.165 [2024-11-18 10:45:31.932280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.958102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.958157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:06.165 [2024-11-18 10:45:31.958169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.708 ms 00:17:06.165 [2024-11-18 10:45:31.958177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.970928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.970972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:06.165 [2024-11-18 10:45:31.970984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:17:06.165 [2024-11-18 10:45:31.970992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.983322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.983376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:06.165 [2024-11-18 10:45:31.983388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:17:06.165 [2024-11-18 10:45:31.983395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.165 [2024-11-18 10:45:31.984040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.165 [2024-11-18 10:45:31.984066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:06.165 [2024-11-18 10:45:31.984076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:06.165 [2024-11-18 10:45:31.984084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.048435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 
10:45:32.048500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:06.426 [2024-11-18 10:45:32.048516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.325 ms 00:17:06.426 [2024-11-18 10:45:32.048525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.059385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:06.426 [2024-11-18 10:45:32.079451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.079508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:06.426 [2024-11-18 10:45:32.079524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.830 ms 00:17:06.426 [2024-11-18 10:45:32.079533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.079646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.079659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:06.426 [2024-11-18 10:45:32.079670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:06.426 [2024-11-18 10:45:32.079678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.079739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.079749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:06.426 [2024-11-18 10:45:32.079758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:06.426 [2024-11-18 10:45:32.079767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.079795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.079806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:06.426 [2024-11-18 10:45:32.079815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:06.426 [2024-11-18 10:45:32.079823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.079860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:06.426 [2024-11-18 10:45:32.079871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.079880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:06.426 [2024-11-18 10:45:32.079891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:06.426 [2024-11-18 10:45:32.079899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.106519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.106570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:06.426 [2024-11-18 10:45:32.106586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.599 ms 00:17:06.426 [2024-11-18 10:45:32.106595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.106732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.426 [2024-11-18 10:45:32.106745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:06.426 [2024-11-18 
10:45:32.106755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:06.426 [2024-11-18 10:45:32.106763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.426 [2024-11-18 10:45:32.107901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.426 [2024-11-18 10:45:32.111312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.765 ms, result 0 00:17:06.426 [2024-11-18 10:45:32.112631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.426 [2024-11-18 10:45:32.125956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.369  [2024-11-18T10:45:34.197Z] Copying: 25/256 [MB] (25 MBps) [2024-11-18T10:45:35.582Z] Copying: 44/256 [MB] (19 MBps) [2024-11-18T10:45:36.525Z] Copying: 59/256 [MB] (14 MBps) [2024-11-18T10:45:37.469Z] Copying: 73/256 [MB] (14 MBps) [2024-11-18T10:45:38.413Z] Copying: 92/256 [MB] (19 MBps) [2024-11-18T10:45:39.357Z] Copying: 111/256 [MB] (19 MBps) [2024-11-18T10:45:40.300Z] Copying: 132/256 [MB] (20 MBps) [2024-11-18T10:45:41.245Z] Copying: 149/256 [MB] (17 MBps) [2024-11-18T10:45:42.189Z] Copying: 164/256 [MB] (14 MBps) [2024-11-18T10:45:43.576Z] Copying: 181/256 [MB] (16 MBps) [2024-11-18T10:45:44.520Z] Copying: 198/256 [MB] (16 MBps) [2024-11-18T10:45:45.464Z] Copying: 227/256 [MB] (28 MBps) [2024-11-18T10:45:46.035Z] Copying: 243/256 [MB] (16 MBps) [2024-11-18T10:45:46.609Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-18 10:45:46.313570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.725 [2024-11-18 10:45:46.324143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.725 [2024-11-18 10:45:46.324197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.725 [2024-11-18 10:45:46.324237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:20.725 [2024-11-18 10:45:46.324255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.725 [2024-11-18 10:45:46.324283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.725 [2024-11-18 10:45:46.328087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.725 [2024-11-18 10:45:46.328127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.725 [2024-11-18 10:45:46.328141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:17:20.725 [2024-11-18 10:45:46.328150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.725 [2024-11-18 10:45:46.328466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.725 [2024-11-18 10:45:46.328479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.725 [2024-11-18 10:45:46.328489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:20.725 [2024-11-18 10:45:46.328497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.725 [2024-11-18 10:45:46.332182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.725 [2024-11-18 10:45:46.332235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.725 [2024-11-18 
10:45:46.332245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:17:20.726 [2024-11-18 10:45:46.332254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.340020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.340062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.726 [2024-11-18 10:45:46.340075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.745 ms 00:17:20.726 [2024-11-18 10:45:46.340084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.367607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.367654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.726 [2024-11-18 10:45:46.367667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.443 ms 00:17:20.726 [2024-11-18 10:45:46.367676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.384027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.384273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.726 [2024-11-18 10:45:46.384298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.281 ms 00:17:20.726 [2024-11-18 10:45:46.384314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.384487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.384500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.726 [2024-11-18 10:45:46.384510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:20.726 [2024-11-18 10:45:46.384518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.409882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.409924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.726 [2024-11-18 10:45:46.409937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.334 ms 00:17:20.726 [2024-11-18 10:45:46.409946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.435954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.436152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.726 [2024-11-18 10:45:46.436172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.943 ms 00:17:20.726 [2024-11-18 10:45:46.436180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.460615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.460661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.726 [2024-11-18 10:45:46.460673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.275 ms 00:17:20.726 [2024-11-18 10:45:46.460681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.484905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.726 [2024-11-18 10:45:46.484951] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.726 [2024-11-18 10:45:46.484963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.142 ms 00:17:20.726 [2024-11-18 10:45:46.484971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.726 [2024-11-18 10:45:46.485017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.726 [2024-11-18 10:45:46.485033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 
10:45:46.485237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.726 [2024-11-18 10:45:46.485441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.726 [2024-11-18 10:45:46.485535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.727 [2024-11-18 10:45:46.485890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.727 [2024-11-18 10:45:46.485899] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b69bf4f7-41ea-4a56-b153-3b2fc53c436b 00:17:20.727 [2024-11-18 10:45:46.485907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.727 [2024-11-18 10:45:46.485915] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.727 [2024-11-18 10:45:46.485923] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.727 [2024-11-18 10:45:46.485932] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.727 [2024-11-18 10:45:46.485941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.727 [2024-11-18 10:45:46.485950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.727 [2024-11-18 10:45:46.485958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.727 [2024-11-18 10:45:46.485965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.727 [2024-11-18 10:45:46.485971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.727 [2024-11-18 10:45:46.485979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.727 [2024-11-18 10:45:46.485990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.727 [2024-11-18 10:45:46.485999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:17:20.727 [2024-11-18 10:45:46.486007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.727 [2024-11-18 10:45:46.499307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.727 [2024-11-18 10:45:46.499346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.727 [2024-11-18 10:45:46.499359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.265 ms 00:17:20.727 [2024-11-18 10:45:46.499367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.727 [2024-11-18 10:45:46.499772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.727 [2024-11-18 10:45:46.499782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.727 [2024-11-18 10:45:46.499791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:17:20.727 [2024-11-18 10:45:46.499798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.727 [2024-11-18 10:45:46.538384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.727 [2024-11-18 10:45:46.538584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.727 [2024-11-18 10:45:46.538604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.727 [2024-11-18 10:45:46.538613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.727 
[2024-11-18 10:45:46.538727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.727 [2024-11-18 10:45:46.538737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.727 [2024-11-18 10:45:46.538747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.727 [2024-11-18 10:45:46.538755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.727 [2024-11-18 10:45:46.538811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.727 [2024-11-18 10:45:46.538821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.727 [2024-11-18 10:45:46.538829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.727 [2024-11-18 10:45:46.538837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.727 [2024-11-18 10:45:46.538856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.727 [2024-11-18 10:45:46.538868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.727 [2024-11-18 10:45:46.538876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.727 [2024-11-18 10:45:46.538884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.622456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.622513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.988 [2024-11-18 10:45:46.622527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.622536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.988 [2024-11-18 10:45:46.691261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.988 [2024-11-18 10:45:46.691363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.988 [2024-11-18 10:45:46.691425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.988 [2024-11-18 10:45:46.691550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691558] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.988 [2024-11-18 10:45:46.691611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.988 [2024-11-18 10:45:46.691683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.988 [2024-11-18 10:45:46.691746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.988 [2024-11-18 10:45:46.691758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.988 [2024-11-18 10:45:46.691766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.988 [2024-11-18 10:45:46.691922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.778 ms, result 0 00:17:21.559 00:17:21.559 00:17:21.819 10:45:47 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:22.391 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:22.391 10:45:48 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74068 00:17:22.391 10:45:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74068 ']' 00:17:22.391 Process with pid 74068 is not found 00:17:22.391 10:45:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74068 00:17:22.391 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74068) - No such process 00:17:22.391 10:45:48 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74068 is not found' 00:17:22.391 00:17:22.391 real 1m13.752s 00:17:22.391 user 1m29.452s 00:17:22.391 sys 0m15.820s 00:17:22.391 10:45:48 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.391 10:45:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:22.391 ************************************ 00:17:22.391 END TEST ftl_trim 00:17:22.391 ************************************ 00:17:22.391 10:45:48 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:22.391 10:45:48 ftl -- common/autotest_common.sh@1105 -- # '[' 
5 -le 1 ']' 00:17:22.391 10:45:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.391 10:45:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:22.391 ************************************ 00:17:22.391 START TEST ftl_restore 00:17:22.391 ************************************ 00:17:22.391 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:22.391 * Looking for test storage... 00:17:22.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.391 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.391 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.391 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.653 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.653 10:45:48 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:22.653 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.653 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.653 --rc genhtml_branch_coverage=1 00:17:22.653 --rc genhtml_function_coverage=1 00:17:22.653 --rc genhtml_legend=1 00:17:22.653 --rc geninfo_all_blocks=1 00:17:22.653 --rc geninfo_unexecuted_blocks=1 00:17:22.653 00:17:22.653 ' 00:17:22.653 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.653 --rc genhtml_branch_coverage=1 00:17:22.653 --rc genhtml_function_coverage=1 00:17:22.653 --rc genhtml_legend=1 00:17:22.653 --rc geninfo_all_blocks=1 00:17:22.653 --rc geninfo_unexecuted_blocks=1 00:17:22.653 00:17:22.653 ' 00:17:22.653 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.653 --rc genhtml_branch_coverage=1 00:17:22.653 --rc genhtml_function_coverage=1 00:17:22.653 --rc genhtml_legend=1 00:17:22.653 --rc geninfo_all_blocks=1 00:17:22.653 --rc geninfo_unexecuted_blocks=1 00:17:22.653 00:17:22.653 ' 00:17:22.653 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.653 --rc genhtml_branch_coverage=1 00:17:22.653 --rc genhtml_function_coverage=1 00:17:22.653 --rc genhtml_legend=1 00:17:22.653 --rc geninfo_all_blocks=1 00:17:22.653 --rc geninfo_unexecuted_blocks=1 00:17:22.653 00:17:22.653 ' 00:17:22.653 10:45:48 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zu86mZU7dc 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:22.654 
10:45:48 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74373 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74373 00:17:22.654 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74373 ']' 00:17:22.654 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.654 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.654 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.654 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.654 10:45:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:22.654 10:45:48 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.654 [2024-11-18 10:45:48.461682] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:22.654 [2024-11-18 10:45:48.461831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74373 ] 00:17:22.914 [2024-11-18 10:45:48.624848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.915 [2024-11-18 10:45:48.748627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:23.881 10:45:49 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:23.881 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:24.143 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:24.143 { 00:17:24.143 "name": "nvme0n1", 00:17:24.143 "aliases": [ 00:17:24.143 "4e03192c-dcda-4966-b9f8-2b50a98ee6f4" 00:17:24.143 ], 00:17:24.143 "product_name": "NVMe disk", 00:17:24.143 "block_size": 4096, 00:17:24.143 "num_blocks": 1310720, 00:17:24.143 "uuid": 
"4e03192c-dcda-4966-b9f8-2b50a98ee6f4", 00:17:24.143 "numa_id": -1, 00:17:24.143 "assigned_rate_limits": { 00:17:24.143 "rw_ios_per_sec": 0, 00:17:24.143 "rw_mbytes_per_sec": 0, 00:17:24.143 "r_mbytes_per_sec": 0, 00:17:24.143 "w_mbytes_per_sec": 0 00:17:24.143 }, 00:17:24.143 "claimed": true, 00:17:24.143 "claim_type": "read_many_write_one", 00:17:24.143 "zoned": false, 00:17:24.143 "supported_io_types": { 00:17:24.143 "read": true, 00:17:24.143 "write": true, 00:17:24.143 "unmap": true, 00:17:24.143 "flush": true, 00:17:24.143 "reset": true, 00:17:24.143 "nvme_admin": true, 00:17:24.143 "nvme_io": true, 00:17:24.143 "nvme_io_md": false, 00:17:24.143 "write_zeroes": true, 00:17:24.143 "zcopy": false, 00:17:24.143 "get_zone_info": false, 00:17:24.143 "zone_management": false, 00:17:24.143 "zone_append": false, 00:17:24.143 "compare": true, 00:17:24.143 "compare_and_write": false, 00:17:24.143 "abort": true, 00:17:24.143 "seek_hole": false, 00:17:24.143 "seek_data": false, 00:17:24.143 "copy": true, 00:17:24.143 "nvme_iov_md": false 00:17:24.143 }, 00:17:24.143 "driver_specific": { 00:17:24.143 "nvme": [ 00:17:24.143 { 00:17:24.143 "pci_address": "0000:00:11.0", 00:17:24.143 "trid": { 00:17:24.143 "trtype": "PCIe", 00:17:24.143 "traddr": "0000:00:11.0" 00:17:24.143 }, 00:17:24.143 "ctrlr_data": { 00:17:24.143 "cntlid": 0, 00:17:24.143 "vendor_id": "0x1b36", 00:17:24.143 "model_number": "QEMU NVMe Ctrl", 00:17:24.143 "serial_number": "12341", 00:17:24.143 "firmware_revision": "8.0.0", 00:17:24.143 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:24.143 "oacs": { 00:17:24.143 "security": 0, 00:17:24.143 "format": 1, 00:17:24.143 "firmware": 0, 00:17:24.143 "ns_manage": 1 00:17:24.143 }, 00:17:24.143 "multi_ctrlr": false, 00:17:24.143 "ana_reporting": false 00:17:24.143 }, 00:17:24.143 "vs": { 00:17:24.143 "nvme_version": "1.4" 00:17:24.143 }, 00:17:24.143 "ns_data": { 00:17:24.143 "id": 1, 00:17:24.143 "can_share": false 00:17:24.143 } 00:17:24.143 } 00:17:24.143 ], 00:17:24.143 "mp_policy": "active_passive" 00:17:24.143 } 00:17:24.143 } 00:17:24.143 ]' 00:17:24.143 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:24.143 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:24.143 10:45:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:24.143 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:24.143 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:24.143 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:24.143 10:45:50 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:24.143 10:45:50 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:24.143 10:45:50 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:24.403 10:45:50 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:24.403 10:45:50 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:24.403 10:45:50 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=822a6030-028f-4116-bf27-f8c281d4b2f2 00:17:24.403 10:45:50 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:24.403 10:45:50 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 822a6030-028f-4116-bf27-f8c281d4b2f2 00:17:24.665 10:45:50 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:24.926 10:45:50 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=419efb89-10f1-4097-bc52-2c622c8678c9 00:17:24.926 10:45:50 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 419efb89-10f1-4097-bc52-2c622c8678c9 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:25.188 10:45:50 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.188 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.188 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.188 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:25.188 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:25.188 10:45:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.448 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.448 { 00:17:25.448 "name": "8b315ec4-93f4-4e86-815a-f94030eeb3b7", 00:17:25.448 "aliases": [ 00:17:25.448 "lvs/nvme0n1p0" 00:17:25.448 ], 00:17:25.448 "product_name": "Logical Volume", 00:17:25.448 "block_size": 4096, 00:17:25.448 "num_blocks": 26476544, 00:17:25.448 "uuid": "8b315ec4-93f4-4e86-815a-f94030eeb3b7", 00:17:25.448 "assigned_rate_limits": { 00:17:25.448 "rw_ios_per_sec": 0, 00:17:25.448 "rw_mbytes_per_sec": 0, 00:17:25.448 "r_mbytes_per_sec": 0, 00:17:25.448 "w_mbytes_per_sec": 0 00:17:25.448 }, 00:17:25.448 "claimed": false, 00:17:25.448 "zoned": false, 00:17:25.448 "supported_io_types": { 00:17:25.448 "read": true, 00:17:25.448 "write": true, 00:17:25.448 "unmap": true, 00:17:25.448 "flush": false, 00:17:25.448 "reset": true, 00:17:25.448 "nvme_admin": false, 00:17:25.448 "nvme_io": false, 00:17:25.448 "nvme_io_md": false, 00:17:25.448 "write_zeroes": true, 00:17:25.448 "zcopy": false, 00:17:25.448 "get_zone_info": false, 00:17:25.448 "zone_management": false, 00:17:25.448 "zone_append": false, 00:17:25.448 "compare": false, 00:17:25.448 "compare_and_write": false, 00:17:25.448 "abort": false, 00:17:25.448 "seek_hole": true, 00:17:25.448 "seek_data": true, 00:17:25.449 "copy": false, 00:17:25.449 "nvme_iov_md": false 00:17:25.449 }, 00:17:25.449 "driver_specific": { 00:17:25.449 "lvol": { 00:17:25.449 "lvol_store_uuid": "419efb89-10f1-4097-bc52-2c622c8678c9", 00:17:25.449 "base_bdev": "nvme0n1", 00:17:25.449 "thin_provision": true, 00:17:25.449 "num_allocated_clusters": 0, 00:17:25.449 "snapshot": false, 00:17:25.449 "clone": false, 00:17:25.449 "esnap_clone": false 00:17:25.449 } 00:17:25.449 } 00:17:25.449 } 00:17:25.449 ]' 00:17:25.449 10:45:51 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.449 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.449 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.449 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.449 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.449 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.449 10:45:51 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:25.449 10:45:51 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:25.449 10:45:51 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:25.709 10:45:51 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:25.709 10:45:51 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:25.709 10:45:51 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.709 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.709 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.709 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:25.710 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:25.710 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.971 { 00:17:25.971 "name": "8b315ec4-93f4-4e86-815a-f94030eeb3b7", 00:17:25.971 "aliases": [ 00:17:25.971 "lvs/nvme0n1p0" 00:17:25.971 ], 00:17:25.971 "product_name": "Logical Volume", 00:17:25.971 "block_size": 4096, 00:17:25.971 "num_blocks": 26476544, 00:17:25.971 "uuid": "8b315ec4-93f4-4e86-815a-f94030eeb3b7", 00:17:25.971 "assigned_rate_limits": { 00:17:25.971 "rw_ios_per_sec": 0, 00:17:25.971 "rw_mbytes_per_sec": 0, 00:17:25.971 "r_mbytes_per_sec": 0, 00:17:25.971 "w_mbytes_per_sec": 0 00:17:25.971 }, 00:17:25.971 "claimed": false, 00:17:25.971 "zoned": false, 00:17:25.971 "supported_io_types": { 00:17:25.971 "read": true, 00:17:25.971 "write": true, 00:17:25.971 "unmap": true, 00:17:25.971 "flush": false, 00:17:25.971 "reset": true, 00:17:25.971 "nvme_admin": false, 00:17:25.971 "nvme_io": false, 00:17:25.971 "nvme_io_md": false, 00:17:25.971 "write_zeroes": true, 00:17:25.971 "zcopy": false, 00:17:25.971 "get_zone_info": false, 00:17:25.971 "zone_management": false, 00:17:25.971 "zone_append": false, 00:17:25.971 "compare": false, 00:17:25.971 "compare_and_write": false, 00:17:25.971 "abort": false, 00:17:25.971 "seek_hole": true, 00:17:25.971 "seek_data": true, 00:17:25.971 "copy": false, 00:17:25.971 "nvme_iov_md": false 00:17:25.971 }, 00:17:25.971 "driver_specific": { 00:17:25.971 "lvol": { 00:17:25.971 "lvol_store_uuid": "419efb89-10f1-4097-bc52-2c622c8678c9", 00:17:25.971 "base_bdev": "nvme0n1", 00:17:25.971 "thin_provision": true, 00:17:25.971 "num_allocated_clusters": 0, 00:17:25.971 "snapshot": false, 00:17:25.971 "clone": false, 00:17:25.971 "esnap_clone": false 00:17:25.971 } 00:17:25.971 } 00:17:25.971 } 00:17:25.971 ]' 00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.971 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.971 10:45:51 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:25.971 10:45:51 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:26.232 10:45:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:26.232 10:45:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:26.232 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:26.232 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:26.232 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:26.232 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:26.232 10:45:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b315ec4-93f4-4e86-815a-f94030eeb3b7 00:17:26.492 10:45:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:26.492 { 00:17:26.492 "name": "8b315ec4-93f4-4e86-815a-f94030eeb3b7", 00:17:26.492 "aliases": [ 00:17:26.492 "lvs/nvme0n1p0" 00:17:26.492 ], 00:17:26.492 "product_name": "Logical Volume", 00:17:26.492 "block_size": 4096, 00:17:26.492 "num_blocks": 26476544, 00:17:26.492 "uuid": "8b315ec4-93f4-4e86-815a-f94030eeb3b7", 00:17:26.492 "assigned_rate_limits": { 00:17:26.492 "rw_ios_per_sec": 0, 00:17:26.492 "rw_mbytes_per_sec": 0, 00:17:26.492 "r_mbytes_per_sec": 0, 00:17:26.492 "w_mbytes_per_sec": 0 00:17:26.492 }, 00:17:26.492 "claimed": false, 00:17:26.492 "zoned": false, 00:17:26.492 "supported_io_types": { 00:17:26.492 "read": true, 00:17:26.492 "write": true, 00:17:26.492 "unmap": true, 00:17:26.492 "flush": false, 00:17:26.492 "reset": true, 00:17:26.492 "nvme_admin": false, 00:17:26.492 "nvme_io": false, 00:17:26.492 "nvme_io_md": false, 00:17:26.492 "write_zeroes": true, 00:17:26.492 "zcopy": false, 00:17:26.492 "get_zone_info": false, 00:17:26.492 "zone_management": false, 00:17:26.492 "zone_append": false, 00:17:26.492 "compare": false, 00:17:26.492 "compare_and_write": false, 00:17:26.492 "abort": false, 00:17:26.492 "seek_hole": true, 00:17:26.492 "seek_data": true, 00:17:26.492 "copy": false, 00:17:26.492 "nvme_iov_md": false 00:17:26.492 }, 00:17:26.492 "driver_specific": { 00:17:26.492 "lvol": { 00:17:26.492 "lvol_store_uuid": "419efb89-10f1-4097-bc52-2c622c8678c9", 00:17:26.492 "base_bdev": "nvme0n1", 00:17:26.492 "thin_provision": true, 00:17:26.492 "num_allocated_clusters": 0, 00:17:26.492 "snapshot": false, 00:17:26.492 "clone": false, 00:17:26.492 "esnap_clone": false 00:17:26.492 } 00:17:26.492 } 00:17:26.492 } 00:17:26.492 ]' 00:17:26.492 10:45:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:26.492 10:45:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:26.492 10:45:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:26.492 10:45:52 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:17:26.492 10:45:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:26.492 10:45:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8b315ec4-93f4-4e86-815a-f94030eeb3b7 --l2p_dram_limit 10' 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:26.492 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:26.492 10:45:52 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8b315ec4-93f4-4e86-815a-f94030eeb3b7 --l2p_dram_limit 10 -c nvc0n1p0 00:17:26.753 [2024-11-18 10:45:52.417159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.417197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.753 [2024-11-18 10:45:52.417223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.753 [2024-11-18 10:45:52.417231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.417289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.417297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.753 [2024-11-18 10:45:52.417305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:26.753 [2024-11-18 10:45:52.417311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.417330] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.753 [2024-11-18 10:45:52.417916] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.753 [2024-11-18 10:45:52.417935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.417941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.753 [2024-11-18 10:45:52.417949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:17:26.753 [2024-11-18 10:45:52.417955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.418009] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f09a5254-e303-4f50-a673-b0726b000e27 00:17:26.753 [2024-11-18 10:45:52.418936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.418960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:26.753 [2024-11-18 10:45:52.418968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:26.753 [2024-11-18 10:45:52.418976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.423578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 
10:45:52.423610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.753 [2024-11-18 10:45:52.423619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:17:26.753 [2024-11-18 10:45:52.423626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.423691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.423700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.753 [2024-11-18 10:45:52.423707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:26.753 [2024-11-18 10:45:52.423715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.423754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.423763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.753 [2024-11-18 10:45:52.423770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.753 [2024-11-18 10:45:52.423779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.423795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.753 [2024-11-18 10:45:52.426661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.426686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.753 [2024-11-18 10:45:52.426696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:17:26.753 [2024-11-18 10:45:52.426702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.753 [2024-11-18 10:45:52.426728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.753 [2024-11-18 10:45:52.426734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.754 [2024-11-18 10:45:52.426742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:26.754 [2024-11-18 10:45:52.426748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.754 [2024-11-18 10:45:52.426761] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:26.754 [2024-11-18 10:45:52.426865] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.754 [2024-11-18 10:45:52.426877] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.754 [2024-11-18 10:45:52.426885] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.754 [2024-11-18 10:45:52.426894] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.754 [2024-11-18 10:45:52.426901] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.754 [2024-11-18 10:45:52.426908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:26.754 [2024-11-18 10:45:52.426914] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.754 [2024-11-18 10:45:52.426923] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.754 [2024-11-18 10:45:52.426928] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.754 [2024-11-18 10:45:52.426935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.754 [2024-11-18 10:45:52.426941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.754 [2024-11-18 10:45:52.426949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:26.754 [2024-11-18 10:45:52.426959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.754 [2024-11-18 10:45:52.427025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.754 [2024-11-18 10:45:52.427031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.754 [2024-11-18 10:45:52.427038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:26.754 [2024-11-18 10:45:52.427044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.754 [2024-11-18 10:45:52.427121] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.754 [2024-11-18 10:45:52.427128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.754 [2024-11-18 10:45:52.427136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.754 [2024-11-18 10:45:52.427154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.754 [2024-11-18 10:45:52.427171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.754 [2024-11-18 10:45:52.427183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.754 [2024-11-18 10:45:52.427188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:26.754 [2024-11-18 10:45:52.427194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.754 [2024-11-18 10:45:52.427199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.754 [2024-11-18 10:45:52.427215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:26.754 [2024-11-18 10:45:52.427221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.754 [2024-11-18 10:45:52.427234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.754 [2024-11-18 10:45:52.427254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.754 
[2024-11-18 10:45:52.427271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.754 [2024-11-18 10:45:52.427289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.754 [2024-11-18 10:45:52.427305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.754 [2024-11-18 10:45:52.427324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.754 [2024-11-18 10:45:52.427335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.754 [2024-11-18 10:45:52.427340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:26.754 [2024-11-18 10:45:52.427346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.754 [2024-11-18 10:45:52.427351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.754 [2024-11-18 10:45:52.427357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:26.754 [2024-11-18 10:45:52.427362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.754 [2024-11-18 10:45:52.427373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:26.754 [2024-11-18 10:45:52.427379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427384] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.754 [2024-11-18 10:45:52.427391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.754 [2024-11-18 10:45:52.427397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.754 [2024-11-18 10:45:52.427411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.754 [2024-11-18 10:45:52.427419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.754 [2024-11-18 10:45:52.427425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.754 [2024-11-18 10:45:52.427431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.754 [2024-11-18 10:45:52.427436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.754 [2024-11-18 10:45:52.427443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.754 [2024-11-18 10:45:52.427451] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.754 [2024-11-18 
10:45:52.427459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.754 [2024-11-18 10:45:52.427467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:26.754 [2024-11-18 10:45:52.427474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:26.754 [2024-11-18 10:45:52.427479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:26.754 [2024-11-18 10:45:52.427486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:26.754 [2024-11-18 10:45:52.427491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:26.754 [2024-11-18 10:45:52.427497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:26.754 [2024-11-18 10:45:52.427503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:26.755 [2024-11-18 10:45:52.427509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:26.755 [2024-11-18 10:45:52.427515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:26.755 [2024-11-18 10:45:52.427523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:26.755 [2024-11-18 10:45:52.427528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:26.755 [2024-11-18 10:45:52.427534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:26.755 [2024-11-18 10:45:52.427539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:26.755 [2024-11-18 10:45:52.427546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:26.755 [2024-11-18 10:45:52.427553] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.755 [2024-11-18 10:45:52.427560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.755 [2024-11-18 10:45:52.427566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.755 [2024-11-18 10:45:52.427573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.755 [2024-11-18 10:45:52.427578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.755 [2024-11-18 10:45:52.427584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.755 [2024-11-18 10:45:52.427590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.755 [2024-11-18 10:45:52.427597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.755 [2024-11-18 10:45:52.427603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:17:26.755 [2024-11-18 10:45:52.427609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.755 [2024-11-18 10:45:52.427639] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:26.755 [2024-11-18 10:45:52.427649] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:30.962 [2024-11-18 10:45:56.250345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.250697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:30.962 [2024-11-18 10:45:56.250726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3822.689 ms 00:17:30.962 [2024-11-18 10:45:56.250738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.283876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.283948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.962 [2024-11-18 10:45:56.283963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.884 ms 00:17:30.962 [2024-11-18 10:45:56.283974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.284131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.284146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.962 [2024-11-18 10:45:56.284155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:30.962 [2024-11-18 10:45:56.284170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.320513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.320574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.962 [2024-11-18 10:45:56.320587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.273 ms 00:17:30.962 [2024-11-18 10:45:56.320598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.320637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.320654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.962 [2024-11-18 10:45:56.320663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:30.962 [2024-11-18 10:45:56.320674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.321351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.321440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.962 [2024-11-18 10:45:56.321455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:17:30.962 [2024-11-18 10:45:56.321465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 
[2024-11-18 10:45:56.321596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.321608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.962 [2024-11-18 10:45:56.321620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:30.962 [2024-11-18 10:45:56.321633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.339645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.339698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.962 [2024-11-18 10:45:56.339710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.990 ms 00:17:30.962 [2024-11-18 10:45:56.339721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.353426] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:30.962 [2024-11-18 10:45:56.357427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.357475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.962 [2024-11-18 10:45:56.357490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.608 ms 00:17:30.962 [2024-11-18 10:45:56.357498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.467876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.962 [2024-11-18 10:45:56.467947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:30.962 [2024-11-18 10:45:56.467970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.335 ms 00:17:30.962 [2024-11-18 10:45:56.467980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.962 [2024-11-18 10:45:56.468230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.468247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.963 [2024-11-18 10:45:56.468263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:30.963 [2024-11-18 10:45:56.468271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.495415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.495474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:30.963 [2024-11-18 10:45:56.495492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.078 ms 00:17:30.963 [2024-11-18 10:45:56.495501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.522881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.523108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:30.963 [2024-11-18 10:45:56.523140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.314 ms 00:17:30.963 [2024-11-18 10:45:56.523148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.523786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.523809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.963 
[2024-11-18 10:45:56.523822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:17:30.963 [2024-11-18 10:45:56.523831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.609237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.609450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:30.963 [2024-11-18 10:45:56.609486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.316 ms 00:17:30.963 [2024-11-18 10:45:56.609496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.638712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.638788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:30.963 [2024-11-18 10:45:56.638806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.112 ms 00:17:30.963 [2024-11-18 10:45:56.638815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.665699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.665905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:30.963 [2024-11-18 10:45:56.665934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.819 ms 00:17:30.963 [2024-11-18 10:45:56.665942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.693049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.693263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.963 [2024-11-18 10:45:56.693291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.982 ms 00:17:30.963 [2024-11-18 10:45:56.693300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.693354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.693364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.963 [2024-11-18 10:45:56.693379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:30.963 [2024-11-18 10:45:56.693387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.693499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.963 [2024-11-18 10:45:56.693510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.963 [2024-11-18 10:45:56.693524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:30.963 [2024-11-18 10:45:56.693532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.963 [2024-11-18 10:45:56.694709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4277.006 ms, result 0 00:17:30.963 { 00:17:30.963 "name": "ftl0", 00:17:30.963 "uuid": "f09a5254-e303-4f50-a673-b0726b000e27" 00:17:30.963 } 00:17:30.963 10:45:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:30.963 10:45:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:31.263 10:45:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:31.263 10:45:56 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:31.263 [2024-11-18 10:45:57.126034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.263 [2024-11-18 10:45:57.126112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.263 [2024-11-18 10:45:57.126128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.263 [2024-11-18 10:45:57.126147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.263 [2024-11-18 10:45:57.126174] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.538 [2024-11-18 10:45:57.129412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.538 [2024-11-18 10:45:57.129461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.538 [2024-11-18 10:45:57.129475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:17:31.538 [2024-11-18 10:45:57.129483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.538 [2024-11-18 10:45:57.129801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.538 [2024-11-18 10:45:57.129822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.538 [2024-11-18 10:45:57.129839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:31.538 [2024-11-18 10:45:57.129847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.538 [2024-11-18 10:45:57.133105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.538 [2024-11-18 10:45:57.133310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.538 [2024-11-18 10:45:57.133334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:17:31.538 [2024-11-18 10:45:57.133345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.538 [2024-11-18 10:45:57.139581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.538 [2024-11-18 10:45:57.139752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:31.538 [2024-11-18 10:45:57.139781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:17:31.538 [2024-11-18 10:45:57.139790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.538 [2024-11-18 10:45:57.166940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.538 [2024-11-18 10:45:57.167141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.539 [2024-11-18 10:45:57.167170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.057 ms 00:17:31.539 [2024-11-18 10:45:57.167178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.184899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.539 [2024-11-18 10:45:57.184960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:31.539 [2024-11-18 10:45:57.184979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.561 ms 00:17:31.539 [2024-11-18 10:45:57.184987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.185172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.539 [2024-11-18 10:45:57.185185] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:31.539 [2024-11-18 10:45:57.185198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:31.539 [2024-11-18 10:45:57.185233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.211844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.539 [2024-11-18 10:45:57.211897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:31.539 [2024-11-18 10:45:57.211913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.588 ms 00:17:31.539 [2024-11-18 10:45:57.211921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.238106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.539 [2024-11-18 10:45:57.238158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:31.539 [2024-11-18 10:45:57.238174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.126 ms 00:17:31.539 [2024-11-18 10:45:57.238181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.264020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.539 [2024-11-18 10:45:57.264222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:31.539 [2024-11-18 10:45:57.264250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.758 ms 00:17:31.539 [2024-11-18 10:45:57.264258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.289909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.539 [2024-11-18 10:45:57.289961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:31.539 [2024-11-18 10:45:57.289976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.537 ms 00:17:31.539 [2024-11-18 10:45:57.289984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.539 [2024-11-18 10:45:57.290038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:31.539 [2024-11-18 10:45:57.290054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290339] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 
[2024-11-18 10:45:57.290569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:31.539 [2024-11-18 10:45:57.290809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:31.539 [2024-11-18 10:45:57.290860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.290994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:31.540 [2024-11-18 10:45:57.291183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:31.540 [2024-11-18 10:45:57.291196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f09a5254-e303-4f50-a673-b0726b000e27 00:17:31.540 [2024-11-18 10:45:57.291216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:31.540 [2024-11-18 10:45:57.291229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:31.540 [2024-11-18 10:45:57.291243] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:31.540 [2024-11-18 10:45:57.291257] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:31.540 [2024-11-18 10:45:57.291264] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:31.540 [2024-11-18 10:45:57.291274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:31.540 [2024-11-18 10:45:57.291282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:31.540 [2024-11-18 10:45:57.291290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:31.540 [2024-11-18 10:45:57.291296] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:31.540 [2024-11-18 10:45:57.291306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.540 [2024-11-18 10:45:57.291313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:31.540 [2024-11-18 10:45:57.291325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:17:31.540 [2024-11-18 10:45:57.291333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.540 [2024-11-18 10:45:57.305405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.540 [2024-11-18 10:45:57.305452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:31.540 [2024-11-18 10:45:57.305467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.017 ms 00:17:31.540 [2024-11-18 10:45:57.305475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.540 [2024-11-18 10:45:57.305892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.540 [2024-11-18 10:45:57.305909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:31.540 [2024-11-18 10:45:57.305921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:17:31.540 [2024-11-18 10:45:57.305932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.540 [2024-11-18 10:45:57.353222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.540 [2024-11-18 10:45:57.353271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.540 [2024-11-18 10:45:57.353286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.540 [2024-11-18 10:45:57.353294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.540 [2024-11-18 10:45:57.353371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.540 [2024-11-18 10:45:57.353380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.540 [2024-11-18 10:45:57.353391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.540 [2024-11-18 10:45:57.353402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.540 [2024-11-18 10:45:57.353503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.540 [2024-11-18 10:45:57.353514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.540 [2024-11-18 10:45:57.353525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.540 [2024-11-18 10:45:57.353533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.540 [2024-11-18 10:45:57.353556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.540 [2024-11-18 10:45:57.353564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.540 [2024-11-18 10:45:57.353575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.540 [2024-11-18 10:45:57.353583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.440508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.440571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.807 [2024-11-18 10:45:57.440588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:31.807 [2024-11-18 10:45:57.440598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.511867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.807 [2024-11-18 10:45:57.512126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.807 [2024-11-18 10:45:57.512310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.807 [2024-11-18 10:45:57.512429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.807 [2024-11-18 10:45:57.512578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:31.807 [2024-11-18 10:45:57.512646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.807 [2024-11-18 10:45:57.512721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.807 [2024-11-18 10:45:57.512794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.807 [2024-11-18 10:45:57.512805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.807 [2024-11-18 10:45:57.512813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.807 [2024-11-18 10:45:57.512964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.888 ms, result 0 00:17:31.807 true 00:17:31.807 10:45:57 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74373 
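The shutdown above finished with result 0, and the statistics dump explains the alarming-looking "WAF: inf": write amplification is total writes divided by user writes, here 960 / 0, so infinity is expected for a device that has only ever carried metadata. The trace lines that follow run the data phase of restore.sh; below is a minimal standalone sketch of the same three steps, using the paths exactly as they appear in the log (the --json argument is assumed to be the config captured by the save_subsystem_config call earlier):

  # write 1 GiB of random data: 256K records x 4 KiB = 262144 * 4096 B
  # = 1073741824 B, matching the "262144+0 records out" line that follows
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
  # checksum the file so the restored contents can be verified later
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # stream the file into the ftl0 bdev through spdk_dd, driven by the saved config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json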
00:17:31.807 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74373 ']' 00:17:31.807 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74373 00:17:31.807 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:17:31.807 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.807 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74373 00:17:31.807 killing process with pid 74373 00:17:31.808 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.808 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.808 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74373' 00:17:31.808 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74373 00:17:31.808 10:45:57 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74373 00:17:38.395 10:46:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:41.698 262144+0 records in 00:17:41.698 262144+0 records out 00:17:41.698 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.7134 s, 289 MB/s 00:17:41.698 10:46:07 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:43.614 10:46:09 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.614 [2024-11-18 10:46:09.247305] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:43.614 [2024-11-18 10:46:09.247419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74606 ] 00:17:43.614 [2024-11-18 10:46:09.407466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.875 [2024-11-18 10:46:09.510855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.137 [2024-11-18 10:46:09.801571] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.137 [2024-11-18 10:46:09.801849] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.137 [2024-11-18 10:46:09.962722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.962782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.137 [2024-11-18 10:46:09.962803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.137 [2024-11-18 10:46:09.962812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.962869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.962880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.137 [2024-11-18 10:46:09.962892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:44.137 [2024-11-18 10:46:09.962900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.962920] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:17:44.137 [2024-11-18 10:46:09.963663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.137 [2024-11-18 10:46:09.963690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.963698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.137 [2024-11-18 10:46:09.963707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:17:44.137 [2024-11-18 10:46:09.963715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.965421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.137 [2024-11-18 10:46:09.979716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.979766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.137 [2024-11-18 10:46:09.979779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.297 ms 00:17:44.137 [2024-11-18 10:46:09.979787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.979865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.979876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.137 [2024-11-18 10:46:09.979885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:44.137 [2024-11-18 10:46:09.979892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.987822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.987866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.137 [2024-11-18 10:46:09.987878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.852 ms 00:17:44.137 [2024-11-18 10:46:09.987887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.987967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.987976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.137 [2024-11-18 10:46:09.987985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:44.137 [2024-11-18 10:46:09.987994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.988036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.988046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.137 [2024-11-18 10:46:09.988054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:44.137 [2024-11-18 10:46:09.988062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.988085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.137 [2024-11-18 10:46:09.992215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.137 [2024-11-18 10:46:09.992256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.137 [2024-11-18 10:46:09.992266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:17:44.137 [2024-11-18 10:46:09.992277] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.137 [2024-11-18 10:46:09.992312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.138 [2024-11-18 10:46:09.992321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.138 [2024-11-18 10:46:09.992329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:44.138 [2024-11-18 10:46:09.992337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.138 [2024-11-18 10:46:09.992389] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.138 [2024-11-18 10:46:09.992423] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.138 [2024-11-18 10:46:09.992462] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.138 [2024-11-18 10:46:09.992481] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.138 [2024-11-18 10:46:09.992587] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.138 [2024-11-18 10:46:09.992599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.138 [2024-11-18 10:46:09.992612] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.138 [2024-11-18 10:46:09.992623] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.138 [2024-11-18 10:46:09.992632] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.138 [2024-11-18 10:46:09.992642] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.138 [2024-11-18 10:46:09.992650] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.138 [2024-11-18 10:46:09.992658] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.138 [2024-11-18 10:46:09.992666] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.138 [2024-11-18 10:46:09.992677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.138 [2024-11-18 10:46:09.992685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.138 [2024-11-18 10:46:09.992693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:44.138 [2024-11-18 10:46:09.992702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.138 [2024-11-18 10:46:09.992784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.138 [2024-11-18 10:46:09.992794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.138 [2024-11-18 10:46:09.992802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:44.138 [2024-11-18 10:46:09.992808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.138 [2024-11-18 10:46:09.992912] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.138 [2024-11-18 10:46:09.992932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.138 [2024-11-18 10:46:09.992941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:17:44.138 [2024-11-18 10:46:09.992949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.992958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.138 [2024-11-18 10:46:09.992965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.992972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.138 [2024-11-18 10:46:09.992979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.138 [2024-11-18 10:46:09.992986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.138 [2024-11-18 10:46:09.992995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.138 [2024-11-18 10:46:09.993002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.138 [2024-11-18 10:46:09.993011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.138 [2024-11-18 10:46:09.993018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.138 [2024-11-18 10:46:09.993025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.138 [2024-11-18 10:46:09.993032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:44.138 [2024-11-18 10:46:09.993046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.138 [2024-11-18 10:46:09.993060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.138 [2024-11-18 10:46:09.993082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.138 [2024-11-18 10:46:09.993102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.138 [2024-11-18 10:46:09.993122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.138 [2024-11-18 10:46:09.993143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.138 [2024-11-18 10:46:09.993164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.138 [2024-11-18 10:46:09.993177] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:17:44.138 [2024-11-18 10:46:09.993184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:44.138 [2024-11-18 10:46:09.993190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.138 [2024-11-18 10:46:09.993199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.138 [2024-11-18 10:46:09.993239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:44.138 [2024-11-18 10:46:09.993247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.138 [2024-11-18 10:46:09.993262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:44.138 [2024-11-18 10:46:09.993269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.138 [2024-11-18 10:46:09.993286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.138 [2024-11-18 10:46:09.993294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.138 [2024-11-18 10:46:09.993309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.138 [2024-11-18 10:46:09.993317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.138 [2024-11-18 10:46:09.993324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.138 [2024-11-18 10:46:09.993331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.138 [2024-11-18 10:46:09.993338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.138 [2024-11-18 10:46:09.993345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.138 [2024-11-18 10:46:09.993353] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.138 [2024-11-18 10:46:09.993362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.138 [2024-11-18 10:46:09.993378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:44.138 [2024-11-18 10:46:09.993386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:44.138 [2024-11-18 10:46:09.993393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:44.138 [2024-11-18 10:46:09.993401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:44.138 [2024-11-18 10:46:09.993408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:44.138 [2024-11-18 10:46:09.993415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:44.138 [2024-11-18 10:46:09.993422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:44.138 [2024-11-18 10:46:09.993429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:44.138 [2024-11-18 10:46:09.993436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:44.138 [2024-11-18 10:46:09.993473] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.138 [2024-11-18 10:46:09.993484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.138 [2024-11-18 10:46:09.993500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.138 [2024-11-18 10:46:09.993508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.138 [2024-11-18 10:46:09.993516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.138 [2024-11-18 10:46:09.993524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.138 [2024-11-18 10:46:09.993532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.138 [2024-11-18 10:46:09.993540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:17:44.138 [2024-11-18 10:46:09.993547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.026640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.026863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.400 [2024-11-18 10:46:10.026884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.048 ms 00:17:44.400 [2024-11-18 10:46:10.026893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.026995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.027005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.400 [2024-11-18 10:46:10.027014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.060 ms 00:17:44.400 [2024-11-18 10:46:10.027023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.074861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.075090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.400 [2024-11-18 10:46:10.075115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.772 ms 00:17:44.400 [2024-11-18 10:46:10.075124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.075186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.075197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.400 [2024-11-18 10:46:10.075234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.400 [2024-11-18 10:46:10.075249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.075841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.075884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.400 [2024-11-18 10:46:10.075896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:17:44.400 [2024-11-18 10:46:10.075904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.076068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.076079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.400 [2024-11-18 10:46:10.076088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:17:44.400 [2024-11-18 10:46:10.076103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.092121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.092169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.400 [2024-11-18 10:46:10.092185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.998 ms 00:17:44.400 [2024-11-18 10:46:10.092193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.106786] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:44.400 [2024-11-18 10:46:10.106853] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.400 [2024-11-18 10:46:10.106868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.106877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.400 [2024-11-18 10:46:10.106887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.536 ms 00:17:44.400 [2024-11-18 10:46:10.106896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.133017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.133066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.400 [2024-11-18 10:46:10.133086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.065 ms 00:17:44.400 [2024-11-18 10:46:10.133094] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.145888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.145941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.400 [2024-11-18 10:46:10.145953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.730 ms 00:17:44.400 [2024-11-18 10:46:10.145961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.158479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.158522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.400 [2024-11-18 10:46:10.158534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.470 ms 00:17:44.400 [2024-11-18 10:46:10.158541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.159228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.159253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.400 [2024-11-18 10:46:10.159264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:17:44.400 [2024-11-18 10:46:10.159272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.224221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.224463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.400 [2024-11-18 10:46:10.224490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.909 ms 00:17:44.400 [2024-11-18 10:46:10.224507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.235951] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:44.400 [2024-11-18 10:46:10.239354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.239400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.400 [2024-11-18 10:46:10.239414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.475 ms 00:17:44.400 [2024-11-18 10:46:10.239424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.239530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.239544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.400 [2024-11-18 10:46:10.239554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:44.400 [2024-11-18 10:46:10.239562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.239639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.239650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.400 [2024-11-18 10:46:10.239660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:44.400 [2024-11-18 10:46:10.239668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.239689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.239698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:17:44.400 [2024-11-18 10:46:10.239707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.400 [2024-11-18 10:46:10.239715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.400 [2024-11-18 10:46:10.239750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.400 [2024-11-18 10:46:10.239762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.400 [2024-11-18 10:46:10.239773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.401 [2024-11-18 10:46:10.239783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.401 [2024-11-18 10:46:10.239791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.401 [2024-11-18 10:46:10.265408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.401 [2024-11-18 10:46:10.265462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.401 [2024-11-18 10:46:10.265476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.598 ms 00:17:44.401 [2024-11-18 10:46:10.265485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.401 [2024-11-18 10:46:10.265581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.401 [2024-11-18 10:46:10.265593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.401 [2024-11-18 10:46:10.265603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:44.401 [2024-11-18 10:46:10.265611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.401 [2024-11-18 10:46:10.267008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.789 ms, result 0 00:17:45.789  [2024-11-18T10:46:12.617Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-18T10:46:13.562Z] Copying: 43/1024 [MB] (32 MBps) [2024-11-18T10:46:14.507Z] Copying: 65/1024 [MB] (22 MBps) [2024-11-18T10:46:15.452Z] Copying: 79/1024 [MB] (13 MBps) [2024-11-18T10:46:16.397Z] Copying: 94/1024 [MB] (14 MBps) [2024-11-18T10:46:17.343Z] Copying: 113/1024 [MB] (19 MBps) [2024-11-18T10:46:18.288Z] Copying: 126/1024 [MB] (13 MBps) [2024-11-18T10:46:19.676Z] Copying: 142/1024 [MB] (15 MBps) [2024-11-18T10:46:20.623Z] Copying: 161/1024 [MB] (18 MBps) [2024-11-18T10:46:21.569Z] Copying: 179/1024 [MB] (17 MBps) [2024-11-18T10:46:22.513Z] Copying: 198/1024 [MB] (19 MBps) [2024-11-18T10:46:23.457Z] Copying: 216/1024 [MB] (18 MBps) [2024-11-18T10:46:24.401Z] Copying: 236/1024 [MB] (19 MBps) [2024-11-18T10:46:25.344Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-18T10:46:26.289Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-18T10:46:27.677Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-18T10:46:28.621Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-18T10:46:29.565Z] Copying: 304/1024 [MB] (26 MBps) [2024-11-18T10:46:30.508Z] Copying: 318/1024 [MB] (13 MBps) [2024-11-18T10:46:31.452Z] Copying: 332/1024 [MB] (13 MBps) [2024-11-18T10:46:32.397Z] Copying: 344/1024 [MB] (11 MBps) [2024-11-18T10:46:33.408Z] Copying: 357/1024 [MB] (12 MBps) [2024-11-18T10:46:34.353Z] Copying: 367/1024 [MB] (10 MBps) [2024-11-18T10:46:35.297Z] Copying: 380/1024 [MB] (13 MBps) [2024-11-18T10:46:36.682Z] Copying: 409/1024 [MB] (28 MBps) [2024-11-18T10:46:37.626Z] Copying: 432/1024 [MB] (23 MBps) [2024-11-18T10:46:38.569Z] Copying: 448/1024 [MB] (15 MBps) 
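The bracketed entries around this point are spdk_dd's periodic progress reports; the stream closes just below with "(average 16 MBps)". A hedged cross-check of that figure from the surrounding trace timestamps (FTL startup finished at 10:46:10, last progress entry at 10:47:11, so 1024 MB in roughly 61 s; rough sanity arithmetic only, not part of the test scripts):

  # GNU date assumed; timestamps copied from the trace above and below
  start=$(date -d '2024-11-18 10:46:10' +%s)
  end=$(date -d '2024-11-18 10:47:11' +%s)
  echo "avg ~ $(( 1024 / (end - start) )) MB/s"   # 1024 / 61 ~ 16 MB/s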
[2024-11-18T10:47:11.948Z] Copying: 1024/1024 [MB] (average 16 MBps) [2024-11-18 10:47:11.688316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.688397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:46.064 [2024-11-18 10:47:11.688427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.064 [2024-11-18 10:47:11.688437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.688459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.064 [2024-11-18 10:47:11.691529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.691729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:46.064 [2024-11-18 10:47:11.691753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:18:46.064 [2024-11-18 10:47:11.691762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.694869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.695029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:46.064 [2024-11-18 10:47:11.695048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.067 ms 00:18:46.064 [2024-11-18 10:47:11.695057]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.712725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.712775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:46.064 [2024-11-18 10:47:11.712788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.647 ms 00:18:46.064 [2024-11-18 10:47:11.712796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.718998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.719049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:46.064 [2024-11-18 10:47:11.719061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.160 ms 00:18:46.064 [2024-11-18 10:47:11.719075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.746040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.746087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:46.064 [2024-11-18 10:47:11.746100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.908 ms 00:18:46.064 [2024-11-18 10:47:11.746108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.762031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.762079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:46.064 [2024-11-18 10:47:11.762092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.873 ms 00:18:46.064 [2024-11-18 10:47:11.762100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.762272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.762286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:46.064 [2024-11-18 10:47:11.762303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:46.064 [2024-11-18 10:47:11.762311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.788367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.788586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:46.064 [2024-11-18 10:47:11.788609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.040 ms 00:18:46.064 [2024-11-18 10:47:11.788618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.814779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.814835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:46.064 [2024-11-18 10:47:11.814864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.827 ms 00:18:46.064 [2024-11-18 10:47:11.814872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.840151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.840199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:46.064 [2024-11-18 10:47:11.840226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 25.228 ms 00:18:46.064 [2024-11-18 10:47:11.840234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.865165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.064 [2024-11-18 10:47:11.865235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:46.064 [2024-11-18 10:47:11.865247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.854 ms 00:18:46.064 [2024-11-18 10:47:11.865255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.064 [2024-11-18 10:47:11.865301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:46.064 [2024-11-18 10:47:11.865317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:46.064 [2024-11-18 10:47:11.865456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:18:46.065 [2024-11-18 10:47:11.865478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.865998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.866005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.866012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.866020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.866028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.866036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:46.065 [2024-11-18 10:47:11.866044] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:46.066 [2024-11-18 10:47:11.866052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:46.066 [2024-11-18 10:47:11.866060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:46.066 [2024-11-18 10:47:11.866068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:46.066 [2024-11-18 10:47:11.866076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:46.066 [2024-11-18 10:47:11.866084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:46.066 [2024-11-18 10:47:11.866100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:46.066 [2024-11-18 10:47:11.866115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f09a5254-e303-4f50-a673-b0726b000e27 00:18:46.066 [2024-11-18 10:47:11.866124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:46.066 [2024-11-18 10:47:11.866135] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:46.066 [2024-11-18 10:47:11.866142] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:46.066 [2024-11-18 10:47:11.866150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:46.066 [2024-11-18 10:47:11.866157] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:46.066 [2024-11-18 10:47:11.866165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:46.066 [2024-11-18 10:47:11.866173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:46.066 [2024-11-18 10:47:11.866188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:46.066 [2024-11-18 10:47:11.866195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:46.066 [2024-11-18 10:47:11.866214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.066 [2024-11-18 10:47:11.866223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:46.066 [2024-11-18 10:47:11.866232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:18:46.066 [2024-11-18 10:47:11.866240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.066 [2024-11-18 10:47:11.879921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.066 [2024-11-18 10:47:11.879966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:46.066 [2024-11-18 10:47:11.879979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.647 ms 00:18:46.066 [2024-11-18 10:47:11.879986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.066 [2024-11-18 10:47:11.880408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.066 [2024-11-18 10:47:11.880432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:46.066 [2024-11-18 10:47:11.880441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:18:46.066 [2024-11-18 10:47:11.880449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.066 [2024-11-18 10:47:11.917026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.066 [2024-11-18 10:47:11.917077] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.066 [2024-11-18 10:47:11.917090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.066 [2024-11-18 10:47:11.917099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.066 [2024-11-18 10:47:11.917173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.066 [2024-11-18 10:47:11.917184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.066 [2024-11-18 10:47:11.917193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.066 [2024-11-18 10:47:11.917217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.066 [2024-11-18 10:47:11.917291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.066 [2024-11-18 10:47:11.917301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.066 [2024-11-18 10:47:11.917311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.066 [2024-11-18 10:47:11.917320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.066 [2024-11-18 10:47:11.917338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.066 [2024-11-18 10:47:11.917348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.066 [2024-11-18 10:47:11.917357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.066 [2024-11-18 10:47:11.917366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.003281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.003333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.327 [2024-11-18 10:47:12.003347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.003355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.327 [2024-11-18 10:47:12.073260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.327 [2024-11-18 10:47:12.073377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.327 [2024-11-18 10:47:12.073443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073548] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.327 [2024-11-18 10:47:12.073571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:46.327 [2024-11-18 10:47:12.073631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.327 [2024-11-18 10:47:12.073712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.327 [2024-11-18 10:47:12.073781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.327 [2024-11-18 10:47:12.073790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.327 [2024-11-18 10:47:12.073799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.327 [2024-11-18 10:47:12.073936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.581 ms, result 0 00:18:47.271 00:18:47.271 00:18:47.271 10:47:13 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:47.533 [2024-11-18 10:47:13.161747] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:47.533 [2024-11-18 10:47:13.161893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75266 ] 00:18:47.533 [2024-11-18 10:47:13.324966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.794 [2024-11-18 10:47:13.444718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.057 [2024-11-18 10:47:13.731965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.057 [2024-11-18 10:47:13.732349] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.057 [2024-11-18 10:47:13.893844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.893906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:48.057 [2024-11-18 10:47:13.893928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:48.057 [2024-11-18 10:47:13.893937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.893994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.894005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.057 [2024-11-18 10:47:13.894018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:48.057 [2024-11-18 10:47:13.894027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.894049] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:48.057 [2024-11-18 10:47:13.894773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:48.057 [2024-11-18 10:47:13.894802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.894811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.057 [2024-11-18 10:47:13.894820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:18:48.057 [2024-11-18 10:47:13.894828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.896665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:48.057 [2024-11-18 10:47:13.910904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.910954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:48.057 [2024-11-18 10:47:13.910969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.240 ms 00:18:48.057 [2024-11-18 10:47:13.910977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.911061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.911071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:48.057 [2024-11-18 10:47:13.911081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:48.057 [2024-11-18 10:47:13.911088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.919223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:48.057 [2024-11-18 10:47:13.919262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.057 [2024-11-18 10:47:13.919273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.055 ms 00:18:48.057 [2024-11-18 10:47:13.919281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.919368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.919378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.057 [2024-11-18 10:47:13.919387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:48.057 [2024-11-18 10:47:13.919396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.919440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.919450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:48.057 [2024-11-18 10:47:13.919459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:48.057 [2024-11-18 10:47:13.919467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.919490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:48.057 [2024-11-18 10:47:13.923573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.923615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.057 [2024-11-18 10:47:13.923626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:18:48.057 [2024-11-18 10:47:13.923637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.923672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.923681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:48.057 [2024-11-18 10:47:13.923690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:48.057 [2024-11-18 10:47:13.923697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.923750] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:48.057 [2024-11-18 10:47:13.923773] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:48.057 [2024-11-18 10:47:13.923811] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:48.057 [2024-11-18 10:47:13.923832] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:48.057 [2024-11-18 10:47:13.923939] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:48.057 [2024-11-18 10:47:13.923951] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:48.057 [2024-11-18 10:47:13.923962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:48.057 [2024-11-18 10:47:13.923973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:48.057 [2024-11-18 10:47:13.923983] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:48.057 [2024-11-18 10:47:13.923991] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:48.057 [2024-11-18 10:47:13.923999] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:48.057 [2024-11-18 10:47:13.924007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:48.057 [2024-11-18 10:47:13.924015] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:48.057 [2024-11-18 10:47:13.924026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.924034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:48.057 [2024-11-18 10:47:13.924042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:48.057 [2024-11-18 10:47:13.924052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.924135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.057 [2024-11-18 10:47:13.924144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:48.057 [2024-11-18 10:47:13.924153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:48.057 [2024-11-18 10:47:13.924160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.057 [2024-11-18 10:47:13.924289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:48.057 [2024-11-18 10:47:13.924305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:48.057 [2024-11-18 10:47:13.924314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.057 [2024-11-18 10:47:13.924322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.057 [2024-11-18 10:47:13.924330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:48.057 [2024-11-18 10:47:13.924338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:48.057 [2024-11-18 10:47:13.924345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:48.057 [2024-11-18 10:47:13.924355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:48.057 [2024-11-18 10:47:13.924365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:48.057 [2024-11-18 10:47:13.924371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.058 [2024-11-18 10:47:13.924378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:48.058 [2024-11-18 10:47:13.924384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:48.058 [2024-11-18 10:47:13.924391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.058 [2024-11-18 10:47:13.924399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:48.058 [2024-11-18 10:47:13.924406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:48.058 [2024-11-18 10:47:13.924435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:48.058 [2024-11-18 10:47:13.924449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924456] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:48.058 [2024-11-18 10:47:13.924472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:48.058 [2024-11-18 10:47:13.924493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:48.058 [2024-11-18 10:47:13.924514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:48.058 [2024-11-18 10:47:13.924535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:48.058 [2024-11-18 10:47:13.924555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.058 [2024-11-18 10:47:13.924567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:48.058 [2024-11-18 10:47:13.924574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:48.058 [2024-11-18 10:47:13.924581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.058 [2024-11-18 10:47:13.924588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:48.058 [2024-11-18 10:47:13.924594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:48.058 [2024-11-18 10:47:13.924601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:48.058 [2024-11-18 10:47:13.924614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:48.058 [2024-11-18 10:47:13.924623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924631] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:48.058 [2024-11-18 10:47:13.924639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:48.058 [2024-11-18 10:47:13.924648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.058 [2024-11-18 10:47:13.924663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:48.058 [2024-11-18 10:47:13.924670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:48.058 [2024-11-18 10:47:13.924677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:48.058 
[2024-11-18 10:47:13.924684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:48.058 [2024-11-18 10:47:13.924691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:48.058 [2024-11-18 10:47:13.924698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:48.058 [2024-11-18 10:47:13.924707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:48.058 [2024-11-18 10:47:13.924716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:48.058 [2024-11-18 10:47:13.924732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:48.058 [2024-11-18 10:47:13.924740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:48.058 [2024-11-18 10:47:13.924747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:48.058 [2024-11-18 10:47:13.924754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:48.058 [2024-11-18 10:47:13.924762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:48.058 [2024-11-18 10:47:13.924769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:48.058 [2024-11-18 10:47:13.924775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:48.058 [2024-11-18 10:47:13.924783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:48.058 [2024-11-18 10:47:13.924790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:48.058 [2024-11-18 10:47:13.924825] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:48.058 [2024-11-18 10:47:13.924836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:48.058 [2024-11-18 10:47:13.924852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:48.058 [2024-11-18 10:47:13.924859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:48.058 [2024-11-18 10:47:13.924867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:48.058 [2024-11-18 10:47:13.924875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.058 [2024-11-18 10:47:13.924882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:48.058 [2024-11-18 10:47:13.924890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:18:48.058 [2024-11-18 10:47:13.924899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:13.956867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:13.956919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.320 [2024-11-18 10:47:13.956932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.921 ms 00:18:48.320 [2024-11-18 10:47:13.956940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:13.957036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:13.957045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:48.320 [2024-11-18 10:47:13.957055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:48.320 [2024-11-18 10:47:13.957063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:13.999182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:13.999250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.320 [2024-11-18 10:47:13.999265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.061 ms 00:18:48.320 [2024-11-18 10:47:13.999273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:13.999325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:13.999335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.320 [2024-11-18 10:47:13.999345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:48.320 [2024-11-18 10:47:13.999356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:13.999950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:13.999974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.320 [2024-11-18 10:47:13.999985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:18:48.320 [2024-11-18 10:47:13.999992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.000150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.000161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.320 [2024-11-18 10:47:14.000171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:18:48.320 [2024-11-18 10:47:14.000185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.015863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.015907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.320 [2024-11-18 10:47:14.015922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.624 ms 00:18:48.320 [2024-11-18 10:47:14.015930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.030292] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:48.320 [2024-11-18 10:47:14.030342] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:48.320 [2024-11-18 10:47:14.030356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.030365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:48.320 [2024-11-18 10:47:14.030376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.317 ms 00:18:48.320 [2024-11-18 10:47:14.030382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.056184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.056249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:48.320 [2024-11-18 10:47:14.056261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.746 ms 00:18:48.320 [2024-11-18 10:47:14.056269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.068986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.069032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:48.320 [2024-11-18 10:47:14.069045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.661 ms 00:18:48.320 [2024-11-18 10:47:14.069053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.081675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.081721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:48.320 [2024-11-18 10:47:14.081733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.573 ms 00:18:48.320 [2024-11-18 10:47:14.081740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.082419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.082444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:48.320 [2024-11-18 10:47:14.082454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:18:48.320 [2024-11-18 10:47:14.082466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.148340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.320 [2024-11-18 10:47:14.148611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:48.320 [2024-11-18 10:47:14.148648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.852 ms 00:18:48.320 [2024-11-18 10:47:14.148659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.320 [2024-11-18 10:47:14.160149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:48.321 [2024-11-18 10:47:14.163740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.163787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:48.321 [2024-11-18 10:47:14.163802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.027 ms 00:18:48.321 [2024-11-18 10:47:14.163812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.321 [2024-11-18 10:47:14.163911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.163925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:48.321 [2024-11-18 10:47:14.163935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:48.321 [2024-11-18 10:47:14.163947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.321 [2024-11-18 10:47:14.164022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.164034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:48.321 [2024-11-18 10:47:14.164046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:48.321 [2024-11-18 10:47:14.164055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.321 [2024-11-18 10:47:14.164076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.164087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:48.321 [2024-11-18 10:47:14.164098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:48.321 [2024-11-18 10:47:14.164108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.321 [2024-11-18 10:47:14.164147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:48.321 [2024-11-18 10:47:14.164162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.164172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:48.321 [2024-11-18 10:47:14.164182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:48.321 [2024-11-18 10:47:14.164192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.321 [2024-11-18 10:47:14.190188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.190259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:48.321 [2024-11-18 10:47:14.190274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.940 ms 00:18:48.321 [2024-11-18 10:47:14.190290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.321 [2024-11-18 10:47:14.190385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.321 [2024-11-18 10:47:14.190395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:48.321 [2024-11-18 10:47:14.190405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:48.321 [2024-11-18 10:47:14.190413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:48.321 [2024-11-18 10:47:14.192154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.742 ms, result 0 00:18:49.705  [2024-11-18T10:47:16.533Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-18T10:47:17.477Z] Copying: 33/1024 [MB] (18 MBps) [2024-11-18T10:47:18.421Z] Copying: 47/1024 [MB] (13 MBps) [2024-11-18T10:47:19.806Z] Copying: 58/1024 [MB] (10 MBps) [2024-11-18T10:47:20.378Z] Copying: 68/1024 [MB] (10 MBps) [2024-11-18T10:47:21.766Z] Copying: 79/1024 [MB] (10 MBps) [2024-11-18T10:47:22.709Z] Copying: 89/1024 [MB] (10 MBps) [2024-11-18T10:47:23.653Z] Copying: 100/1024 [MB] (10 MBps) [2024-11-18T10:47:24.597Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-18T10:47:25.542Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-18T10:47:26.487Z] Copying: 132/1024 [MB] (10 MBps) [2024-11-18T10:47:27.431Z] Copying: 142/1024 [MB] (10 MBps) [2024-11-18T10:47:28.818Z] Copying: 152/1024 [MB] (10 MBps) [2024-11-18T10:47:29.404Z] Copying: 173/1024 [MB] (20 MBps) [2024-11-18T10:47:30.788Z] Copying: 184/1024 [MB] (10 MBps) [2024-11-18T10:47:31.729Z] Copying: 197/1024 [MB] (13 MBps) [2024-11-18T10:47:32.674Z] Copying: 214/1024 [MB] (16 MBps) [2024-11-18T10:47:33.619Z] Copying: 226/1024 [MB] (12 MBps) [2024-11-18T10:47:34.564Z] Copying: 242/1024 [MB] (15 MBps) [2024-11-18T10:47:35.576Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-18T10:47:36.527Z] Copying: 268/1024 [MB] (14 MBps) [2024-11-18T10:47:37.470Z] Copying: 279/1024 [MB] (10 MBps) [2024-11-18T10:47:38.414Z] Copying: 290/1024 [MB] (11 MBps) [2024-11-18T10:47:39.813Z] Copying: 307/1024 [MB] (16 MBps) [2024-11-18T10:47:40.386Z] Copying: 323/1024 [MB] (16 MBps) [2024-11-18T10:47:41.773Z] Copying: 338/1024 [MB] (15 MBps) [2024-11-18T10:47:42.717Z] Copying: 354/1024 [MB] (15 MBps) [2024-11-18T10:47:43.661Z] Copying: 369/1024 [MB] (14 MBps) [2024-11-18T10:47:44.606Z] Copying: 382/1024 [MB] (13 MBps) [2024-11-18T10:47:45.550Z] Copying: 397/1024 [MB] (14 MBps) [2024-11-18T10:47:46.494Z] Copying: 414/1024 [MB] (17 MBps) [2024-11-18T10:47:47.440Z] Copying: 430/1024 [MB] (15 MBps) [2024-11-18T10:47:48.382Z] Copying: 441/1024 [MB] (10 MBps) [2024-11-18T10:47:49.765Z] Copying: 451/1024 [MB] (10 MBps) [2024-11-18T10:47:50.710Z] Copying: 462/1024 [MB] (10 MBps) [2024-11-18T10:47:51.664Z] Copying: 473/1024 [MB] (10 MBps) [2024-11-18T10:47:52.609Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-18T10:47:53.553Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-18T10:47:54.499Z] Copying: 506/1024 [MB] (10 MBps) [2024-11-18T10:47:55.444Z] Copying: 517/1024 [MB] (10 MBps) [2024-11-18T10:47:56.389Z] Copying: 537/1024 [MB] (20 MBps) [2024-11-18T10:47:57.777Z] Copying: 553/1024 [MB] (15 MBps) [2024-11-18T10:47:58.722Z] Copying: 566/1024 [MB] (13 MBps) [2024-11-18T10:47:59.667Z] Copying: 582/1024 [MB] (15 MBps) [2024-11-18T10:48:00.611Z] Copying: 598/1024 [MB] (16 MBps) [2024-11-18T10:48:01.556Z] Copying: 615/1024 [MB] (16 MBps) [2024-11-18T10:48:02.497Z] Copying: 636/1024 [MB] (21 MBps) [2024-11-18T10:48:03.440Z] Copying: 657/1024 [MB] (21 MBps) [2024-11-18T10:48:04.384Z] Copying: 679/1024 [MB] (21 MBps) [2024-11-18T10:48:05.770Z] Copying: 695/1024 [MB] (15 MBps) [2024-11-18T10:48:06.714Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-18T10:48:07.671Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-18T10:48:08.663Z] Copying: 730/1024 [MB] (14 MBps) [2024-11-18T10:48:09.605Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-18T10:48:10.548Z] Copying: 752/1024 [MB] (11 MBps) [2024-11-18T10:48:11.492Z] Copying: 766/1024 [MB] (14 MBps) 
[2024-11-18T10:48:12.434Z] Copying: 780/1024 [MB] (13 MBps) [2024-11-18T10:48:13.379Z] Copying: 794/1024 [MB] (14 MBps) [2024-11-18T10:48:14.765Z] Copying: 805/1024 [MB] (10 MBps) [2024-11-18T10:48:15.708Z] Copying: 819/1024 [MB] (14 MBps) [2024-11-18T10:48:16.650Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-18T10:48:17.595Z] Copying: 843/1024 [MB] (13 MBps) [2024-11-18T10:48:18.539Z] Copying: 870/1024 [MB] (26 MBps) [2024-11-18T10:48:19.483Z] Copying: 882/1024 [MB] (12 MBps) [2024-11-18T10:48:20.427Z] Copying: 901/1024 [MB] (18 MBps) [2024-11-18T10:48:21.813Z] Copying: 917/1024 [MB] (16 MBps) [2024-11-18T10:48:22.386Z] Copying: 932/1024 [MB] (15 MBps) [2024-11-18T10:48:23.773Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-18T10:48:24.718Z] Copying: 960/1024 [MB] (16 MBps) [2024-11-18T10:48:25.662Z] Copying: 970/1024 [MB] (10 MBps) [2024-11-18T10:48:26.606Z] Copying: 987/1024 [MB] (16 MBps) [2024-11-18T10:48:27.549Z] Copying: 998/1024 [MB] (10 MBps) [2024-11-18T10:48:28.123Z] Copying: 1013/1024 [MB] (15 MBps) [2024-11-18T10:48:28.385Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-18 10:48:28.332347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.501 [2024-11-18 10:48:28.332444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.501 [2024-11-18 10:48:28.332462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:02.501 [2024-11-18 10:48:28.332472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.501 [2024-11-18 10:48:28.332498] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.501 [2024-11-18 10:48:28.335601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.501 [2024-11-18 10:48:28.335650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.501 [2024-11-18 10:48:28.335671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:20:02.501 [2024-11-18 10:48:28.335680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.501 [2024-11-18 10:48:28.335962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.501 [2024-11-18 10:48:28.336936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.501 [2024-11-18 10:48:28.336997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:02.501 [2024-11-18 10:48:28.337012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.501 [2024-11-18 10:48:28.340960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.501 [2024-11-18 10:48:28.341071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.501 [2024-11-18 10:48:28.341130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms 00:20:02.501 [2024-11-18 10:48:28.341158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.501 [2024-11-18 10:48:28.347787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.501 [2024-11-18 10:48:28.347955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.501 [2024-11-18 10:48:28.348028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.559 ms 00:20:02.501 [2024-11-18 10:48:28.348052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.501 [2024-11-18 10:48:28.377777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:02.501 [2024-11-18 10:48:28.377971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.501 [2024-11-18 10:48:28.378429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.630 ms 00:20:02.501 [2024-11-18 10:48:28.378484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.394582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.763 [2024-11-18 10:48:28.394773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.763 [2024-11-18 10:48:28.394976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.960 ms 00:20:02.763 [2024-11-18 10:48:28.395020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.395192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.763 [2024-11-18 10:48:28.395269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.763 [2024-11-18 10:48:28.395293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:02.763 [2024-11-18 10:48:28.395364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.421621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.763 [2024-11-18 10:48:28.421788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.763 [2024-11-18 10:48:28.421845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.222 ms 00:20:02.763 [2024-11-18 10:48:28.421867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.447609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.763 [2024-11-18 10:48:28.447785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.763 [2024-11-18 10:48:28.447844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:20:02.763 [2024-11-18 10:48:28.447865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.473297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.763 [2024-11-18 10:48:28.473460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.763 [2024-11-18 10:48:28.473517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.223 ms 00:20:02.763 [2024-11-18 10:48:28.473540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.498918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.763 [2024-11-18 10:48:28.499113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.763 [2024-11-18 10:48:28.499182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.930 ms 00:20:02.763 [2024-11-18 10:48:28.499220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.763 [2024-11-18 10:48:28.499272] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.763 [2024-11-18 10:48:28.499303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.763 [2024-11-18 10:48:28.499866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.499939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.499974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500362] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.500972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 
[2024-11-18 10:48:28.501283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:20:02.764 [2024-11-18 10:48:28.501479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.764 [2024-11-18 10:48:28.501664] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.764 [2024-11-18 10:48:28.501678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f09a5254-e303-4f50-a673-b0726b000e27 
00:20:02.764 [2024-11-18 10:48:28.501687] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.764 [2024-11-18 10:48:28.501695] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.764 [2024-11-18 10:48:28.501703] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.765 [2024-11-18 10:48:28.501712] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.765 [2024-11-18 10:48:28.501720] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.765 [2024-11-18 10:48:28.501728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:02.765 [2024-11-18 10:48:28.501743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.765 [2024-11-18 10:48:28.501750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.765 [2024-11-18 10:48:28.501756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.765 [2024-11-18 10:48:28.501765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.765 [2024-11-18 10:48:28.501774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.765 [2024-11-18 10:48:28.501786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.494 ms 00:20:02.765 [2024-11-18 10:48:28.501793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.515478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.765 [2024-11-18 10:48:28.515640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.765 [2024-11-18 10:48:28.515659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.635 ms 00:20:02.765 [2024-11-18 10:48:28.515668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.516059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.765 [2024-11-18 10:48:28.516069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.765 [2024-11-18 10:48:28.516087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:20:02.765 [2024-11-18 10:48:28.516099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.552994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.765 [2024-11-18 10:48:28.553045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.765 [2024-11-18 10:48:28.553057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.765 [2024-11-18 10:48:28.553066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.553128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.765 [2024-11-18 10:48:28.553138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.765 [2024-11-18 10:48:28.553147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.765 [2024-11-18 10:48:28.553162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.553276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.765 [2024-11-18 10:48:28.553288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.765 [2024-11-18 10:48:28.553296] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.765 [2024-11-18 10:48:28.553305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.553321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.765 [2024-11-18 10:48:28.553330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.765 [2024-11-18 10:48:28.553339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.765 [2024-11-18 10:48:28.553348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.765 [2024-11-18 10:48:28.636969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.765 [2024-11-18 10:48:28.637040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.765 [2024-11-18 10:48:28.637054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.765 [2024-11-18 10:48:28.637064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.706629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.706844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.026 [2024-11-18 10:48:28.706865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.706875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.706948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.706959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.026 [2024-11-18 10:48:28.706969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.706978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.707040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.707051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.026 [2024-11-18 10:48:28.707060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.707068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.707175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.707186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.026 [2024-11-18 10:48:28.707195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.707227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.707266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.707276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.026 [2024-11-18 10:48:28.707286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.707294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.707336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.707350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:03.026 [2024-11-18 10:48:28.707360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.707368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.707418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.026 [2024-11-18 10:48:28.707429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.026 [2024-11-18 10:48:28.707438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.026 [2024-11-18 10:48:28.707447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.026 [2024-11-18 10:48:28.707582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.200 ms, result 0 00:20:03.599 00:20:03.599 00:20:03.599 10:48:29 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:06.149 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:06.149 10:48:31 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:06.149 [2024-11-18 10:48:31.824399] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:06.149 [2024-11-18 10:48:31.824559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76072 ] 00:20:06.149 [2024-11-18 10:48:31.986675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.410 [2024-11-18 10:48:32.106834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.672 [2024-11-18 10:48:32.394274] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.672 [2024-11-18 10:48:32.394357] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.935 [2024-11-18 10:48:32.555530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.555593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.935 [2024-11-18 10:48:32.555613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.935 [2024-11-18 10:48:32.555622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.555678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.555690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.935 [2024-11-18 10:48:32.555702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:06.935 [2024-11-18 10:48:32.555709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.555730] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.935 [2024-11-18 10:48:32.556506] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.935 [2024-11-18 10:48:32.556530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 
[2024-11-18 10:48:32.556539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.935 [2024-11-18 10:48:32.556548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:20:06.935 [2024-11-18 10:48:32.556557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.558255] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.935 [2024-11-18 10:48:32.572705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.572758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.935 [2024-11-18 10:48:32.572773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.451 ms 00:20:06.935 [2024-11-18 10:48:32.572781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.572865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.572876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.935 [2024-11-18 10:48:32.572885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:06.935 [2024-11-18 10:48:32.572893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.581080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.581124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.935 [2024-11-18 10:48:32.581135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.109 ms 00:20:06.935 [2024-11-18 10:48:32.581144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.581252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.581262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.935 [2024-11-18 10:48:32.581271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:06.935 [2024-11-18 10:48:32.581279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.581324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.935 [2024-11-18 10:48:32.581335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.935 [2024-11-18 10:48:32.581343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:06.935 [2024-11-18 10:48:32.581353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.935 [2024-11-18 10:48:32.581376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.936 [2024-11-18 10:48:32.585372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.936 [2024-11-18 10:48:32.585411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.936 [2024-11-18 10:48:32.585423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:20:06.936 [2024-11-18 10:48:32.585433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.936 [2024-11-18 10:48:32.585468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.936 [2024-11-18 10:48:32.585477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.936 
[2024-11-18 10:48:32.585486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:06.936 [2024-11-18 10:48:32.585494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.936 [2024-11-18 10:48:32.585547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.936 [2024-11-18 10:48:32.585570] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.936 [2024-11-18 10:48:32.585608] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.936 [2024-11-18 10:48:32.585627] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:06.936 [2024-11-18 10:48:32.585734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.936 [2024-11-18 10:48:32.585746] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.936 [2024-11-18 10:48:32.585758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.936 [2024-11-18 10:48:32.585769] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.936 [2024-11-18 10:48:32.585778] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.936 [2024-11-18 10:48:32.585788] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:06.936 [2024-11-18 10:48:32.585797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.936 [2024-11-18 10:48:32.585805] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.936 [2024-11-18 10:48:32.585813] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.936 [2024-11-18 10:48:32.585824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.936 [2024-11-18 10:48:32.585832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.936 [2024-11-18 10:48:32.585839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:06.936 [2024-11-18 10:48:32.585846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.936 [2024-11-18 10:48:32.585929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.936 [2024-11-18 10:48:32.585939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.936 [2024-11-18 10:48:32.585947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:06.936 [2024-11-18 10:48:32.585954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.936 [2024-11-18 10:48:32.586059] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.936 [2024-11-18 10:48:32.586072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.936 [2024-11-18 10:48:32.586081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.936 [2024-11-18 10:48:32.586104] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.936 [2024-11-18 10:48:32.586127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.936 [2024-11-18 10:48:32.586141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.936 [2024-11-18 10:48:32.586150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:06.936 [2024-11-18 10:48:32.586159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.936 [2024-11-18 10:48:32.586166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.936 [2024-11-18 10:48:32.586174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:06.936 [2024-11-18 10:48:32.586187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.936 [2024-11-18 10:48:32.586230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.936 [2024-11-18 10:48:32.586254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.936 [2024-11-18 10:48:32.586274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.936 [2024-11-18 10:48:32.586295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.936 [2024-11-18 10:48:32.586318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.936 [2024-11-18 10:48:32.586352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.936 [2024-11-18 10:48:32.586367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.936 [2024-11-18 10:48:32.586374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:06.936 [2024-11-18 10:48:32.586380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.936 [2024-11-18 
10:48:32.586387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.936 [2024-11-18 10:48:32.586403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:06.936 [2024-11-18 10:48:32.586410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.936 [2024-11-18 10:48:32.586424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:06.936 [2024-11-18 10:48:32.586431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586438] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.936 [2024-11-18 10:48:32.586447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.936 [2024-11-18 10:48:32.586454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.936 [2024-11-18 10:48:32.586470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.936 [2024-11-18 10:48:32.586477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.936 [2024-11-18 10:48:32.586484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.936 [2024-11-18 10:48:32.586491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.936 [2024-11-18 10:48:32.586498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.936 [2024-11-18 10:48:32.586505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.936 [2024-11-18 10:48:32.586513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.936 [2024-11-18 10:48:32.586523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.936 [2024-11-18 10:48:32.586531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:06.936 [2024-11-18 10:48:32.586538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:06.936 [2024-11-18 10:48:32.586545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:06.936 [2024-11-18 10:48:32.586553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:06.936 [2024-11-18 10:48:32.586560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:06.936 [2024-11-18 10:48:32.586568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:06.936 [2024-11-18 10:48:32.586576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:06.936 [2024-11-18 10:48:32.586583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:06.936 [2024-11-18 10:48:32.586590] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:06.936 [2024-11-18 10:48:32.586597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:06.936 [2024-11-18 10:48:32.586604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:06.936 [2024-11-18 10:48:32.586612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:06.936 [2024-11-18 10:48:32.586619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:06.936 [2024-11-18 10:48:32.586626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:06.937 [2024-11-18 10:48:32.586633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.937 [2024-11-18 10:48:32.586644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.937 [2024-11-18 10:48:32.586652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.937 [2024-11-18 10:48:32.586659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.937 [2024-11-18 10:48:32.586667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.937 [2024-11-18 10:48:32.586674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.937 [2024-11-18 10:48:32.586682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.586690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.937 [2024-11-18 10:48:32.586698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:20:06.937 [2024-11-18 10:48:32.586705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.618492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.618545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.937 [2024-11-18 10:48:32.618558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.740 ms 00:20:06.937 [2024-11-18 10:48:32.618567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.618665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.618675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.937 [2024-11-18 10:48:32.618685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:06.937 [2024-11-18 10:48:32.618695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.664849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.664904] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.937 [2024-11-18 10:48:32.664919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.095 ms 00:20:06.937 [2024-11-18 10:48:32.664928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.664977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.664988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.937 [2024-11-18 10:48:32.664997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.937 [2024-11-18 10:48:32.665009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.665659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.665703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.937 [2024-11-18 10:48:32.665715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:06.937 [2024-11-18 10:48:32.665722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.665881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.665893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.937 [2024-11-18 10:48:32.665901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:06.937 [2024-11-18 10:48:32.665914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.681516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.681563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.937 [2024-11-18 10:48:32.681577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:20:06.937 [2024-11-18 10:48:32.681585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.696589] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:06.937 [2024-11-18 10:48:32.696811] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.937 [2024-11-18 10:48:32.696831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.696841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.937 [2024-11-18 10:48:32.696851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.143 ms 00:20:06.937 [2024-11-18 10:48:32.696860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.723190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.723399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.937 [2024-11-18 10:48:32.723421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.241 ms 00:20:06.937 [2024-11-18 10:48:32.723430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.736488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.736548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band 
info metadata 00:20:06.937 [2024-11-18 10:48:32.736565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.933 ms 00:20:06.937 [2024-11-18 10:48:32.736573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.749023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.749071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.937 [2024-11-18 10:48:32.749083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.400 ms 00:20:06.937 [2024-11-18 10:48:32.749090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.937 [2024-11-18 10:48:32.749756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.937 [2024-11-18 10:48:32.749787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.937 [2024-11-18 10:48:32.749798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:06.937 [2024-11-18 10:48:32.749809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.816415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.816484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:07.199 [2024-11-18 10:48:32.816508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.585 ms 00:20:07.199 [2024-11-18 10:48:32.816518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.828084] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:07.199 [2024-11-18 10:48:32.831812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.831858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.199 [2024-11-18 10:48:32.831871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.234 ms 00:20:07.199 [2024-11-18 10:48:32.831880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.831971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.831984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.199 [2024-11-18 10:48:32.831993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:07.199 [2024-11-18 10:48:32.832004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.832076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.832087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.199 [2024-11-18 10:48:32.832096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:07.199 [2024-11-18 10:48:32.832105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.832126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.832135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.199 [2024-11-18 10:48:32.832144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.199 [2024-11-18 10:48:32.832152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 
[2024-11-18 10:48:32.832189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.199 [2024-11-18 10:48:32.832222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.832232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.199 [2024-11-18 10:48:32.832241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:07.199 [2024-11-18 10:48:32.832249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.858304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.858490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.199 [2024-11-18 10:48:32.858514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.034 ms 00:20:07.199 [2024-11-18 10:48:32.858530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.858613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.199 [2024-11-18 10:48:32.858624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.199 [2024-11-18 10:48:32.858634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:07.199 [2024-11-18 10:48:32.858642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.199 [2024-11-18 10:48:32.859977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.950 ms, result 0 00:20:08.143 
[~66 s of per-second "Copying: N/1024 [MB]" spdk_dd carriage-return progress updates trimmed, from [2024-11-18T10:48:34.971Z] 10/1024 onward; final state:] [2024-11-18T10:49:40.875Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 10:49:40.641197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.641249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.991 [2024-11-18 10:49:40.641261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:14.991 [2024-11-18 10:49:40.641273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.642340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.991 [2024-11-18 10:49:40.645486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.645514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.991 [2024-11-18 10:49:40.645523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:21:14.991 [2024-11-18 10:49:40.645531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.654506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.654534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.991 [2024-11-18 10:49:40.654542] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:21:14.991 [2024-11-18 10:49:40.654548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.670493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.670519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.991 [2024-11-18 10:49:40.670527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.928 ms 00:21:14.991 [2024-11-18 10:49:40.670533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.675321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.675424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.991 [2024-11-18 10:49:40.675436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:21:14.991 [2024-11-18 10:49:40.675443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.693613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.693639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.991 [2024-11-18 10:49:40.693647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.127 ms 00:21:14.991 [2024-11-18 10:49:40.693653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.704748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.704775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.991 [2024-11-18 10:49:40.704784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.071 ms 00:21:14.991 [2024-11-18 10:49:40.704790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.991 [2024-11-18 10:49:40.757982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.991 [2024-11-18 10:49:40.758020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.991 [2024-11-18 10:49:40.758028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.163 ms 00:21:14.992 [2024-11-18 10:49:40.758034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.992 [2024-11-18 10:49:40.775906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.992 [2024-11-18 10:49:40.775931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:14.992 [2024-11-18 10:49:40.775940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:21:14.992 [2024-11-18 10:49:40.775945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.992 [2024-11-18 10:49:40.793179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.992 [2024-11-18 10:49:40.793219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:14.992 [2024-11-18 10:49:40.793227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.210 ms 00:21:14.992 [2024-11-18 10:49:40.793232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.992 [2024-11-18 10:49:40.810171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.992 [2024-11-18 10:49:40.810196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
superblock 00:21:14.992 [2024-11-18 10:49:40.810214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:21:14.992 [2024-11-18 10:49:40.810220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.992 [2024-11-18 10:49:40.827558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.992 [2024-11-18 10:49:40.827668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.992 [2024-11-18 10:49:40.827680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.297 ms 00:21:14.992 [2024-11-18 10:49:40.827686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.992 [2024-11-18 10:49:40.827707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.992 [2024-11-18 10:49:40.827718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117504 / 261120 wr_cnt: 1 state: open 00:21:14.992 
[2024-11-18 10:49:40.827725 .. 10:49:40.828300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries collapsed) 00:21:14.993 
[2024-11-18 10:49:40.828312] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.993 [2024-11-18 10:49:40.828318] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f09a5254-e303-4f50-a673-b0726b000e27 00:21:14.993 [2024-11-18 10:49:40.828324] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117504 00:21:14.993 [2024-11-18 10:49:40.828330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118464 00:21:14.993 [2024-11-18 10:49:40.828335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117504 00:21:14.993 [2024-11-18 10:49:40.828341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:21:14.993 [2024-11-18 10:49:40.828346] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.993 [2024-11-18 10:49:40.828355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.993 [2024-11-18 10:49:40.828365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.993 [2024-11-18 10:49:40.828370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.993 [2024-11-18 10:49:40.828375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.993 [2024-11-18 10:49:40.828380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.993 [2024-11-18 10:49:40.828386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.993 [2024-11-18 10:49:40.828392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:21:14.993 [2024-11-18 10:49:40.828398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.993 [2024-11-18 10:49:40.837863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.993 [2024-11-18 10:49:40.837887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.993 [2024-11-18 10:49:40.837894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.453 ms 00:21:14.993 [2024-11-18 10:49:40.837904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.993 [2024-11-18 10:49:40.838166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.993 [2024-11-18 10:49:40.838173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.993 [2024-11-18 10:49:40.838179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:21:14.993 [2024-11-18 10:49:40.838184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.993 
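Note on the stats dump above: the WAF figure is simply total media writes divided by user writes, 118464 / 117504 ≈ 1.0082, i.e. the FTL issued 960 blocks of its own metadata and relocation writes on top of the 117504 user blocks. A one-line check:

    awk 'BEGIN { printf "WAF = %.4f\n", 118464 / 117504 }'   # -> WAF = 1.0082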
[2024-11-18 10:49:40.863798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.993 [2024-11-18 10:49:40.863824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.994 [2024-11-18 10:49:40.863835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.994 [2024-11-18 10:49:40.863841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.994 [2024-11-18 10:49:40.863879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.994 [2024-11-18 10:49:40.863885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.994 [2024-11-18 10:49:40.863891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.994 [2024-11-18 10:49:40.863896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.994 [2024-11-18 10:49:40.863939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.994 [2024-11-18 10:49:40.863947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.994 [2024-11-18 10:49:40.863953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.994 [2024-11-18 10:49:40.863961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.994 [2024-11-18 10:49:40.863972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.994 [2024-11-18 10:49:40.863978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.994 [2024-11-18 10:49:40.863983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.994 [2024-11-18 10:49:40.863989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.923264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.923299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.255 [2024-11-18 10:49:40.923312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.923318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.971959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.971993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.255 [2024-11-18 10:49:40.972002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.972076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.255 [2024-11-18 10:49:40.972082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.972124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.255 [2024-11-18 10:49:40.972130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972136] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.972229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.255 [2024-11-18 10:49:40.972236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.972275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:15.255 [2024-11-18 10:49:40.972281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.972321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.255 [2024-11-18 10:49:40.972328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.255 [2024-11-18 10:49:40.972375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.255 [2024-11-18 10:49:40.972381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.255 [2024-11-18 10:49:40.972387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.255 [2024-11-18 10:49:40.972485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.670 ms, result 0 00:21:17.180 00:21:17.180 00:21:17.180 10:49:42 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:17.180 [2024-11-18 10:49:42.630453] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:21:17.180 [2024-11-18 10:49:42.630582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76799 ] 00:21:17.180 [2024-11-18 10:49:42.786354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.180 [2024-11-18 10:49:42.861949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.483 [2024-11-18 10:49:43.067002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:17.483 [2024-11-18 10:49:43.067044] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:17.483 [2024-11-18 10:49:43.214268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.483 [2024-11-18 10:49:43.214299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:17.483 [2024-11-18 10:49:43.214312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:17.483 [2024-11-18 10:49:43.214318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.214351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.214358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.484 [2024-11-18 10:49:43.214367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:17.484 [2024-11-18 10:49:43.214372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.214384] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:17.484 [2024-11-18 10:49:43.214875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:17.484 [2024-11-18 10:49:43.214891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.214897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.484 [2024-11-18 10:49:43.214904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:21:17.484 [2024-11-18 10:49:43.214909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.215893] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:17.484 [2024-11-18 10:49:43.226041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.226066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:17.484 [2024-11-18 10:49:43.226076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.149 ms 00:21:17.484 [2024-11-18 10:49:43.226082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.226131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.226139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:17.484 [2024-11-18 10:49:43.226145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:17.484 [2024-11-18 10:49:43.226150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.230477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:17.484 [2024-11-18 10:49:43.230498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.484 [2024-11-18 10:49:43.230505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms 00:21:17.484 [2024-11-18 10:49:43.230511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.230566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.230572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.484 [2024-11-18 10:49:43.230578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:17.484 [2024-11-18 10:49:43.230585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.230623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.230630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:17.484 [2024-11-18 10:49:43.230636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:17.484 [2024-11-18 10:49:43.230642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.230657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:17.484 [2024-11-18 10:49:43.233249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.233269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.484 [2024-11-18 10:49:43.233275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:21:17.484 [2024-11-18 10:49:43.233283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.233307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.233314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:17.484 [2024-11-18 10:49:43.233320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:17.484 [2024-11-18 10:49:43.233326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.233339] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:17.484 [2024-11-18 10:49:43.233352] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:17.484 [2024-11-18 10:49:43.233378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:17.484 [2024-11-18 10:49:43.233391] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:17.484 [2024-11-18 10:49:43.233469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:17.484 [2024-11-18 10:49:43.233482] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:17.484 [2024-11-18 10:49:43.233490] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:17.484 [2024-11-18 10:49:43.233497] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233504] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233510] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:17.484 [2024-11-18 10:49:43.233516] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:17.484 [2024-11-18 10:49:43.233522] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:17.484 [2024-11-18 10:49:43.233527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:17.484 [2024-11-18 10:49:43.233534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.233540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:17.484 [2024-11-18 10:49:43.233546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:21:17.484 [2024-11-18 10:49:43.233551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.233613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.484 [2024-11-18 10:49:43.233622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:17.484 [2024-11-18 10:49:43.233628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:17.484 [2024-11-18 10:49:43.233633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.484 [2024-11-18 10:49:43.233708] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:17.484 [2024-11-18 10:49:43.233717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:17.484 [2024-11-18 10:49:43.233723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:17.484 [2024-11-18 10:49:43.233740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:17.484 [2024-11-18 10:49:43.233756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.484 [2024-11-18 10:49:43.233767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:17.484 [2024-11-18 10:49:43.233772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:17.484 [2024-11-18 10:49:43.233777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.484 [2024-11-18 10:49:43.233782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:17.484 [2024-11-18 10:49:43.233788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:17.484 [2024-11-18 10:49:43.233797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:17.484 [2024-11-18 10:49:43.233807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233811] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:17.484 [2024-11-18 10:49:43.233822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:17.484 [2024-11-18 10:49:43.233836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:17.484 [2024-11-18 10:49:43.233840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.484 [2024-11-18 10:49:43.233845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:17.484 [2024-11-18 10:49:43.233850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:17.485 [2024-11-18 10:49:43.233854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.485 [2024-11-18 10:49:43.233859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:17.485 [2024-11-18 10:49:43.233864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:17.485 [2024-11-18 10:49:43.233869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.485 [2024-11-18 10:49:43.233874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:17.485 [2024-11-18 10:49:43.233879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:17.485 [2024-11-18 10:49:43.233884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.485 [2024-11-18 10:49:43.233888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:17.485 [2024-11-18 10:49:43.233893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:17.485 [2024-11-18 10:49:43.233898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.485 [2024-11-18 10:49:43.233903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:17.485 [2024-11-18 10:49:43.233908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:17.485 [2024-11-18 10:49:43.233913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.485 [2024-11-18 10:49:43.233919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:17.485 [2024-11-18 10:49:43.233924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:17.485 [2024-11-18 10:49:43.233929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.485 [2024-11-18 10:49:43.233934] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:17.485 [2024-11-18 10:49:43.233940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:17.485 [2024-11-18 10:49:43.233946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.485 [2024-11-18 10:49:43.233951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.485 [2024-11-18 10:49:43.233957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:17.485 [2024-11-18 10:49:43.233962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:17.485 [2024-11-18 10:49:43.233967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:17.485 
[2024-11-18 10:49:43.233973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:17.485 [2024-11-18 10:49:43.233977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:17.485 [2024-11-18 10:49:43.233982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:17.485 [2024-11-18 10:49:43.233989] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:17.485 [2024-11-18 10:49:43.233995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:17.485 [2024-11-18 10:49:43.234007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:17.485 [2024-11-18 10:49:43.234013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:17.485 [2024-11-18 10:49:43.234018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:17.485 [2024-11-18 10:49:43.234023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:17.485 [2024-11-18 10:49:43.234029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:17.485 [2024-11-18 10:49:43.234034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:17.485 [2024-11-18 10:49:43.234039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:17.485 [2024-11-18 10:49:43.234045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:17.485 [2024-11-18 10:49:43.234051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:17.485 [2024-11-18 10:49:43.234077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:17.485 [2024-11-18 10:49:43.234085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:17.485 [2024-11-18 10:49:43.234097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:17.485 [2024-11-18 10:49:43.234102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:17.485 [2024-11-18 10:49:43.234108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:17.485 [2024-11-18 10:49:43.234113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.485 [2024-11-18 10:49:43.234119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:17.485 [2024-11-18 10:49:43.234124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:21:17.485 [2024-11-18 10:49:43.234130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.485 [2024-11-18 10:49:43.254770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.485 [2024-11-18 10:49:43.254792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:17.485 [2024-11-18 10:49:43.254800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.611 ms 00:21:17.485 [2024-11-18 10:49:43.254805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.485 [2024-11-18 10:49:43.254867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.485 [2024-11-18 10:49:43.254874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:17.485 [2024-11-18 10:49:43.254879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:17.485 [2024-11-18 10:49:43.254885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.485 [2024-11-18 10:49:43.301050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.485 [2024-11-18 10:49:43.301078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:17.485 [2024-11-18 10:49:43.301087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.130 ms 00:21:17.485 [2024-11-18 10:49:43.301093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.485 [2024-11-18 10:49:43.301117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.485 [2024-11-18 10:49:43.301123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:17.485 [2024-11-18 10:49:43.301130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:17.486 [2024-11-18 10:49:43.301138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.486 [2024-11-18 10:49:43.301453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.486 [2024-11-18 10:49:43.301471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:17.486 [2024-11-18 10:49:43.301479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:17.486 [2024-11-18 10:49:43.301484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.486 [2024-11-18 10:49:43.301579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.486 [2024-11-18 10:49:43.301586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.486 [2024-11-18 10:49:43.301593] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:17.486 [2024-11-18 10:49:43.301598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.486 [2024-11-18 10:49:43.312081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.486 [2024-11-18 10:49:43.312104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:17.486 [2024-11-18 10:49:43.312111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.463 ms 00:21:17.486 [2024-11-18 10:49:43.312120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.486 [2024-11-18 10:49:43.321744] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:17.486 [2024-11-18 10:49:43.321767] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:17.486 [2024-11-18 10:49:43.321776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.486 [2024-11-18 10:49:43.321783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:17.486 [2024-11-18 10:49:43.321789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.580 ms 00:21:17.486 [2024-11-18 10:49:43.321795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.486 [2024-11-18 10:49:43.340305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.486 [2024-11-18 10:49:43.340333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:17.486 [2024-11-18 10:49:43.340341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.481 ms 00:21:17.486 [2024-11-18 10:49:43.340348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.349266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.349293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:17.771 [2024-11-18 10:49:43.349300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.887 ms 00:21:17.771 [2024-11-18 10:49:43.349306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.358127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.358148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:17.771 [2024-11-18 10:49:43.358155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.797 ms 00:21:17.771 [2024-11-18 10:49:43.358160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.358609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.358626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:17.771 [2024-11-18 10:49:43.358633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:21:17.771 [2024-11-18 10:49:43.358644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.401469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.401505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:17.771 [2024-11-18 10:49:43.401520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.812 ms 00:21:17.771 [2024-11-18 10:49:43.401526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.409400] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:17.771 [2024-11-18 10:49:43.411130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.411152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:17.771 [2024-11-18 10:49:43.411160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.570 ms 00:21:17.771 [2024-11-18 10:49:43.411167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.411230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.411239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:17.771 [2024-11-18 10:49:43.411247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:17.771 [2024-11-18 10:49:43.411255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.412368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.412388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:17.771 [2024-11-18 10:49:43.412396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:21:17.771 [2024-11-18 10:49:43.412402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.412428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.412434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:17.771 [2024-11-18 10:49:43.412441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:17.771 [2024-11-18 10:49:43.412447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.412501] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:17.771 [2024-11-18 10:49:43.412511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.412516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:17.771 [2024-11-18 10:49:43.412522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:17.771 [2024-11-18 10:49:43.412528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.430015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.430038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:17.771 [2024-11-18 10:49:43.430047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.473 ms 00:21:17.771 [2024-11-18 10:49:43.430057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 [2024-11-18 10:49:43.430111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.771 [2024-11-18 10:49:43.430118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:17.771 [2024-11-18 10:49:43.430124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:17.771 [2024-11-18 10:49:43.430129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.771 
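Each management step above is traced as a fixed quadruple from mngt/ftl_mngt.c: Action (line 427), name (line 428), duration (line 430), status (line 431), and the per-step durations roll up into the 'FTL startup' total that finish_msg reports just below. A minimal awk sketch for tallying the quadruples (not part of the test suite; it assumes one trace entry per rendered console line and a log saved as console.log):

    awk '
      / 428:trace_step: / { sub(/.*name: /, "");     name = $0 }      # capture the step name
      / 430:trace_step: / { sub(/.*duration: /, ""); sub(/ ms.*/, "") # isolate the millisecond value
                            printf "%-35s %10.3f ms\n", name, $0
                            total += $0 }
      END                 { printf "%-35s %10.3f ms\n", "sum of steps", total }
    ' console.log
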
[2024-11-18 10:49:43.431178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 216.575 ms, result 0 00:21:18.714  [2024-11-18T10:49:45.987Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-18T10:49:46.931Z] Copying: 32/1024 [MB] (13 MBps) [2024-11-18T10:49:47.876Z] Copying: 46/1024 [MB] (14 MBps) [2024-11-18T10:49:48.821Z] Copying: 63/1024 [MB] (16 MBps) [2024-11-18T10:49:49.764Z] Copying: 82/1024 [MB] (18 MBps) [2024-11-18T10:49:50.707Z] Copying: 98/1024 [MB] (16 MBps) [2024-11-18T10:49:51.652Z] Copying: 114/1024 [MB] (15 MBps) [2024-11-18T10:49:52.596Z] Copying: 129/1024 [MB] (14 MBps) [2024-11-18T10:49:53.982Z] Copying: 159/1024 [MB] (30 MBps) [2024-11-18T10:49:54.927Z] Copying: 183/1024 [MB] (23 MBps) [2024-11-18T10:49:55.871Z] Copying: 197/1024 [MB] (14 MBps) [2024-11-18T10:49:56.815Z] Copying: 207/1024 [MB] (10 MBps) [2024-11-18T10:49:57.760Z] Copying: 221/1024 [MB] (13 MBps) [2024-11-18T10:49:58.704Z] Copying: 236/1024 [MB] (14 MBps) [2024-11-18T10:49:59.647Z] Copying: 248/1024 [MB] (12 MBps) [2024-11-18T10:50:00.592Z] Copying: 263/1024 [MB] (14 MBps) [2024-11-18T10:50:01.980Z] Copying: 277/1024 [MB] (14 MBps) [2024-11-18T10:50:02.923Z] Copying: 291/1024 [MB] (13 MBps) [2024-11-18T10:50:03.867Z] Copying: 320/1024 [MB] (29 MBps) [2024-11-18T10:50:04.811Z] Copying: 341/1024 [MB] (21 MBps) [2024-11-18T10:50:05.754Z] Copying: 353/1024 [MB] (11 MBps) [2024-11-18T10:50:06.701Z] Copying: 370/1024 [MB] (17 MBps) [2024-11-18T10:50:07.641Z] Copying: 386/1024 [MB] (15 MBps) [2024-11-18T10:50:08.585Z] Copying: 401/1024 [MB] (14 MBps) [2024-11-18T10:50:09.972Z] Copying: 419/1024 [MB] (18 MBps) [2024-11-18T10:50:10.967Z] Copying: 439/1024 [MB] (19 MBps) [2024-11-18T10:50:11.933Z] Copying: 456/1024 [MB] (16 MBps) [2024-11-18T10:50:12.877Z] Copying: 475/1024 [MB] (19 MBps) [2024-11-18T10:50:13.821Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-18T10:50:14.767Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-18T10:50:15.725Z] Copying: 511/1024 [MB] (14 MBps) [2024-11-18T10:50:16.668Z] Copying: 522/1024 [MB] (10 MBps) [2024-11-18T10:50:17.612Z] Copying: 533/1024 [MB] (10 MBps) [2024-11-18T10:50:18.998Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-18T10:50:19.940Z] Copying: 560/1024 [MB] (16 MBps) [2024-11-18T10:50:20.882Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-18T10:50:21.825Z] Copying: 596/1024 [MB] (18 MBps) [2024-11-18T10:50:22.769Z] Copying: 612/1024 [MB] (16 MBps) [2024-11-18T10:50:23.713Z] Copying: 629/1024 [MB] (16 MBps) [2024-11-18T10:50:24.658Z] Copying: 650/1024 [MB] (20 MBps) [2024-11-18T10:50:25.602Z] Copying: 663/1024 [MB] (13 MBps) [2024-11-18T10:50:26.988Z] Copying: 683/1024 [MB] (19 MBps) [2024-11-18T10:50:27.933Z] Copying: 703/1024 [MB] (20 MBps) [2024-11-18T10:50:28.890Z] Copying: 717/1024 [MB] (13 MBps) [2024-11-18T10:50:29.835Z] Copying: 734/1024 [MB] (17 MBps) [2024-11-18T10:50:30.777Z] Copying: 751/1024 [MB] (17 MBps) [2024-11-18T10:50:31.718Z] Copying: 770/1024 [MB] (19 MBps) [2024-11-18T10:50:32.660Z] Copying: 790/1024 [MB] (19 MBps) [2024-11-18T10:50:33.603Z] Copying: 804/1024 [MB] (13 MBps) [2024-11-18T10:50:34.993Z] Copying: 818/1024 [MB] (13 MBps) [2024-11-18T10:50:35.939Z] Copying: 832/1024 [MB] (14 MBps) [2024-11-18T10:50:36.884Z] Copying: 843/1024 [MB] (10 MBps) [2024-11-18T10:50:37.828Z] Copying: 873/1024 [MB] (30 MBps) [2024-11-18T10:50:38.772Z] Copying: 886/1024 [MB] (12 MBps) [2024-11-18T10:50:39.716Z] Copying: 899/1024 [MB] (12 MBps) [2024-11-18T10:50:40.756Z] Copying: 920/1024 [MB] (20 MBps) 
[2024-11-18T10:50:41.697Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-18T10:50:42.640Z] Copying: 947/1024 [MB] (16 MBps) [2024-11-18T10:50:43.584Z] Copying: 965/1024 [MB] (17 MBps) [2024-11-18T10:50:44.971Z] Copying: 983/1024 [MB] (17 MBps) [2024-11-18T10:50:45.913Z] Copying: 998/1024 [MB] (15 MBps) [2024-11-18T10:50:46.175Z] Copying: 1017/1024 [MB] (18 MBps) [2024-11-18T10:50:46.436Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-18 10:50:46.306911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.307287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:20.552 [2024-11-18 10:50:46.307378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:20.552 [2024-11-18 10:50:46.307405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.552 [2024-11-18 10:50:46.307471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:20.552 [2024-11-18 10:50:46.310879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.311062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:20.552 [2024-11-18 10:50:46.311336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:22:20.552 [2024-11-18 10:50:46.311362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.552 [2024-11-18 10:50:46.311627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.311642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:20.552 [2024-11-18 10:50:46.311652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:22:20.552 [2024-11-18 10:50:46.311661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.552 [2024-11-18 10:50:46.317163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.317311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:20.552 [2024-11-18 10:50:46.317324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.482 ms 00:22:20.552 [2024-11-18 10:50:46.317332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.552 [2024-11-18 10:50:46.324015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.324052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:20.552 [2024-11-18 10:50:46.324065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.639 ms 00:22:20.552 [2024-11-18 10:50:46.324073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.552 [2024-11-18 10:50:46.353054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.353099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:20.552 [2024-11-18 10:50:46.353113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.899 ms 00:22:20.552 [2024-11-18 10:50:46.353121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.552 [2024-11-18 10:50:46.368658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.552 [2024-11-18 10:50:46.368709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:20.552 [2024-11-18 10:50:46.368723] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.487 ms 00:22:20.552 [2024-11-18 10:50:46.368733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.815 [2024-11-18 10:50:46.460288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.815 [2024-11-18 10:50:46.460336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:20.815 [2024-11-18 10:50:46.460347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.502 ms 00:22:20.815 [2024-11-18 10:50:46.460355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.815 [2024-11-18 10:50:46.484521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.815 [2024-11-18 10:50:46.484556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:20.815 [2024-11-18 10:50:46.484567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.152 ms 00:22:20.815 [2024-11-18 10:50:46.484575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.815 [2024-11-18 10:50:46.508751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.815 [2024-11-18 10:50:46.508796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:20.815 [2024-11-18 10:50:46.508818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.141 ms 00:22:20.815 [2024-11-18 10:50:46.508825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.815 [2024-11-18 10:50:46.533182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.815 [2024-11-18 10:50:46.533231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:20.815 [2024-11-18 10:50:46.533243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.317 ms 00:22:20.815 [2024-11-18 10:50:46.533251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.815 [2024-11-18 10:50:46.557475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.815 [2024-11-18 10:50:46.557514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:20.815 [2024-11-18 10:50:46.557526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.152 ms 00:22:20.815 [2024-11-18 10:50:46.557534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.815 [2024-11-18 10:50:46.557578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:20.815 [2024-11-18 10:50:46.557594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:20.815 [2024-11-18 10:50:46.557606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:20.815 [2024-11-18 10:50:46.557614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:20.815 [2024-11-18 10:50:46.557623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:22:20.816 [2024-11-18 10:50:46.557656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.557995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558259] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:20.816 [2024-11-18 10:50:46.558339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:20.817 [2024-11-18 10:50:46.558420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:20.817 [2024-11-18 10:50:46.558429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f09a5254-e303-4f50-a673-b0726b000e27 00:22:20.817 [2024-11-18 10:50:46.558438] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:20.817 [2024-11-18 10:50:46.558446] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14528 00:22:20.817 [2024-11-18 10:50:46.558454] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13568 00:22:20.817 [2024-11-18 10:50:46.558463] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0708 00:22:20.817 [2024-11-18 10:50:46.558470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:20.817 [2024-11-18 10:50:46.558485] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:20.817 [2024-11-18 10:50:46.558493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:20.817 [2024-11-18 10:50:46.558507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:20.817 [2024-11-18 10:50:46.558514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:20.817 [2024-11-18 10:50:46.558521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-18 10:50:46.558529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:20.817 [2024-11-18 10:50:46.558539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:22:20.817 [2024-11-18 10:50:46.558548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-18 10:50:46.572702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-18 10:50:46.572738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:20.817 [2024-11-18 10:50:46.572750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.135 ms 00:22:20.817 [2024-11-18 10:50:46.572765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-18 10:50:46.573172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.817 [2024-11-18 10:50:46.573182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:20.817 [2024-11-18 10:50:46.573192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:22:20.817 [2024-11-18 10:50:46.573199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-18 10:50:46.609937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.817 [2024-11-18 10:50:46.609982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.817 [2024-11-18 10:50:46.609998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.817 [2024-11-18 10:50:46.610007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-18 10:50:46.610069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.817 [2024-11-18 10:50:46.610079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.817 [2024-11-18 10:50:46.610089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.817 [2024-11-18 10:50:46.610098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-18 10:50:46.610180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.817 [2024-11-18 10:50:46.610191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.817 [2024-11-18 10:50:46.610201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.817 [2024-11-18 10:50:46.610233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.817 [2024-11-18 10:50:46.610251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.817 [2024-11-18 10:50:46.610259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.817 [2024-11-18 10:50:46.610269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.817 [2024-11-18 10:50:46.610278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:20.817 [2024-11-18 10:50:46.695535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.817 [2024-11-18 10:50:46.695578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.817 [2024-11-18 10:50:46.695597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.817 [2024-11-18 10:50:46.695606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.766842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.766891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.078 [2024-11-18 10:50:46.766905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 10:50:46.766913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.766972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.766982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.078 [2024-11-18 10:50:46.766991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 10:50:46.767000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.767066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.767076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.078 [2024-11-18 10:50:46.767085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 10:50:46.767093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.767195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.767231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.078 [2024-11-18 10:50:46.767241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 10:50:46.767249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.767285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.767295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:21.078 [2024-11-18 10:50:46.767304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 10:50:46.767312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.767355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.767364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.078 [2024-11-18 10:50:46.767373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 10:50:46.767381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.767429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.078 [2024-11-18 10:50:46.767439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.078 [2024-11-18 10:50:46.767448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.078 [2024-11-18 
10:50:46.767456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.078 [2024-11-18 10:50:46.767593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.643 ms, result 0 00:22:21.651 00:22:21.651 00:22:21.651 10:50:47 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:24.196 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74373 00:22:24.196 10:50:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74373 ']' 00:22:24.196 Process with pid 74373 is not found 00:22:24.196 10:50:49 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74373 00:22:24.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74373) - No such process 00:22:24.196 10:50:49 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74373 is not found' 00:22:24.196 Remove shared memory files 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:24.196 10:50:49 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:24.196 00:22:24.196 real 5m1.600s 00:22:24.196 user 4m48.159s 00:22:24.196 sys 0m12.788s 00:22:24.196 10:50:49 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.196 ************************************ 00:22:24.196 END TEST ftl_restore 00:22:24.196 ************************************ 00:22:24.196 10:50:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:24.196 10:50:49 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:24.196 10:50:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:24.196 10:50:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.196 10:50:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:24.196 ************************************ 00:22:24.196 START TEST ftl_dirty_shutdown 00:22:24.196 ************************************ 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:24.196 * Looking for test storage... 
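The "testfile: OK" line above is the crux of the restore test: a checksum recorded before the device was shut down dirty must still verify after the FTL instance is restored. A minimal sketch of that round-trip, with illustrative paths (the real sequence, including the dirty shutdown itself, lives in test/ftl/restore.sh):

    testfile=/tmp/ftl_testfile                          # illustrative path, not the test's own
    dd if=/dev/urandom of="$testfile" bs=1M count=256   # seed the FTL-backed file with random data
    md5sum "$testfile" > "$testfile.md5"                # record the checksum before shutdown
    # ... dirty shutdown and restore of the FTL bdev happen here ...
    md5sum -c "$testfile.md5"                           # prints "<file>: OK" only if the data survived
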
00:22:24.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.196 --rc genhtml_branch_coverage=1 00:22:24.196 --rc genhtml_function_coverage=1 00:22:24.196 --rc genhtml_legend=1 00:22:24.196 --rc geninfo_all_blocks=1 00:22:24.196 --rc geninfo_unexecuted_blocks=1 00:22:24.196 00:22:24.196 ' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.196 --rc genhtml_branch_coverage=1 00:22:24.196 --rc genhtml_function_coverage=1 00:22:24.196 --rc genhtml_legend=1 00:22:24.196 --rc geninfo_all_blocks=1 00:22:24.196 --rc geninfo_unexecuted_blocks=1 00:22:24.196 00:22:24.196 ' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.196 --rc genhtml_branch_coverage=1 00:22:24.196 --rc genhtml_function_coverage=1 00:22:24.196 --rc genhtml_legend=1 00:22:24.196 --rc geninfo_all_blocks=1 00:22:24.196 --rc geninfo_unexecuted_blocks=1 00:22:24.196 00:22:24.196 ' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.196 --rc genhtml_branch_coverage=1 00:22:24.196 --rc genhtml_function_coverage=1 00:22:24.196 --rc genhtml_legend=1 00:22:24.196 --rc geninfo_all_blocks=1 00:22:24.196 --rc geninfo_unexecuted_blocks=1 00:22:24.196 00:22:24.196 ' 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:24.196 10:50:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:24.196 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:24.196 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:24.196 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:24.197 10:50:50 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77556 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77556 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77556 ']' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.197 10:50:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:24.458 [2024-11-18 10:50:50.113042] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
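waitforlisten above blocks until the freshly launched spdk_tgt (pid 77556 here) starts answering on its UNIX-domain RPC socket. A stripped-down sketch of the same idiom, polling rpc_get_methods rather than using the full helper from autotest_common.sh (SPDK_BIN_DIR is assumed to point at the build output):

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &                   # launch the target pinned to core 0
    svcpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1                     # give up if the target died during startup
        sleep 0.1                                       # otherwise poll until the RPC server responds
    done

The real helper also enforces a timeout; the sketch omits it.
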
00:22:24.458 [2024-11-18 10:50:50.113199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77556 ] 00:22:24.458 [2024-11-18 10:50:50.279496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.718 [2024-11-18 10:50:50.400140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.290 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:25.291 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:25.552 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:25.813 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:25.813 { 00:22:25.813 "name": "nvme0n1", 00:22:25.813 "aliases": [ 00:22:25.813 "f9bc44c1-c1fa-4322-9659-8ba80c5d13fd" 00:22:25.813 ], 00:22:25.813 "product_name": "NVMe disk", 00:22:25.813 "block_size": 4096, 00:22:25.813 "num_blocks": 1310720, 00:22:25.813 "uuid": "f9bc44c1-c1fa-4322-9659-8ba80c5d13fd", 00:22:25.813 "numa_id": -1, 00:22:25.813 "assigned_rate_limits": { 00:22:25.813 "rw_ios_per_sec": 0, 00:22:25.813 "rw_mbytes_per_sec": 0, 00:22:25.813 "r_mbytes_per_sec": 0, 00:22:25.813 "w_mbytes_per_sec": 0 00:22:25.813 }, 00:22:25.813 "claimed": true, 00:22:25.813 "claim_type": "read_many_write_one", 00:22:25.813 "zoned": false, 00:22:25.813 "supported_io_types": { 00:22:25.813 "read": true, 00:22:25.813 "write": true, 00:22:25.813 "unmap": true, 00:22:25.813 "flush": true, 00:22:25.813 "reset": true, 00:22:25.813 "nvme_admin": true, 00:22:25.813 "nvme_io": true, 00:22:25.813 "nvme_io_md": false, 00:22:25.813 "write_zeroes": true, 00:22:25.813 "zcopy": false, 00:22:25.813 "get_zone_info": false, 00:22:25.813 "zone_management": false, 00:22:25.813 "zone_append": false, 00:22:25.813 "compare": true, 00:22:25.813 "compare_and_write": false, 00:22:25.813 "abort": true, 00:22:25.813 "seek_hole": false, 00:22:25.813 "seek_data": false, 00:22:25.813 
"copy": true, 00:22:25.813 "nvme_iov_md": false 00:22:25.813 }, 00:22:25.813 "driver_specific": { 00:22:25.813 "nvme": [ 00:22:25.813 { 00:22:25.813 "pci_address": "0000:00:11.0", 00:22:25.813 "trid": { 00:22:25.813 "trtype": "PCIe", 00:22:25.813 "traddr": "0000:00:11.0" 00:22:25.813 }, 00:22:25.813 "ctrlr_data": { 00:22:25.813 "cntlid": 0, 00:22:25.813 "vendor_id": "0x1b36", 00:22:25.813 "model_number": "QEMU NVMe Ctrl", 00:22:25.813 "serial_number": "12341", 00:22:25.813 "firmware_revision": "8.0.0", 00:22:25.813 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:25.813 "oacs": { 00:22:25.813 "security": 0, 00:22:25.814 "format": 1, 00:22:25.814 "firmware": 0, 00:22:25.814 "ns_manage": 1 00:22:25.814 }, 00:22:25.814 "multi_ctrlr": false, 00:22:25.814 "ana_reporting": false 00:22:25.814 }, 00:22:25.814 "vs": { 00:22:25.814 "nvme_version": "1.4" 00:22:25.814 }, 00:22:25.814 "ns_data": { 00:22:25.814 "id": 1, 00:22:25.814 "can_share": false 00:22:25.814 } 00:22:25.814 } 00:22:25.814 ], 00:22:25.814 "mp_policy": "active_passive" 00:22:25.814 } 00:22:25.814 } 00:22:25.814 ]' 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:25.814 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:26.075 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=419efb89-10f1-4097-bc52-2c622c8678c9 00:22:26.075 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:26.075 10:50:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 419efb89-10f1-4097-bc52-2c622c8678c9 00:22:26.337 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:26.598 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85 00:22:26.598 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:26.860 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:27.122 { 00:22:27.122 "name": "abc8d702-73ab-4943-8071-c2b211dbbfda", 00:22:27.122 "aliases": [ 00:22:27.122 "lvs/nvme0n1p0" 00:22:27.122 ], 00:22:27.122 "product_name": "Logical Volume", 00:22:27.122 "block_size": 4096, 00:22:27.122 "num_blocks": 26476544, 00:22:27.122 "uuid": "abc8d702-73ab-4943-8071-c2b211dbbfda", 00:22:27.122 "assigned_rate_limits": { 00:22:27.122 "rw_ios_per_sec": 0, 00:22:27.122 "rw_mbytes_per_sec": 0, 00:22:27.122 "r_mbytes_per_sec": 0, 00:22:27.122 "w_mbytes_per_sec": 0 00:22:27.122 }, 00:22:27.122 "claimed": false, 00:22:27.122 "zoned": false, 00:22:27.122 "supported_io_types": { 00:22:27.122 "read": true, 00:22:27.122 "write": true, 00:22:27.122 "unmap": true, 00:22:27.122 "flush": false, 00:22:27.122 "reset": true, 00:22:27.122 "nvme_admin": false, 00:22:27.122 "nvme_io": false, 00:22:27.122 "nvme_io_md": false, 00:22:27.122 "write_zeroes": true, 00:22:27.122 "zcopy": false, 00:22:27.122 "get_zone_info": false, 00:22:27.122 "zone_management": false, 00:22:27.122 "zone_append": false, 00:22:27.122 "compare": false, 00:22:27.122 "compare_and_write": false, 00:22:27.122 "abort": false, 00:22:27.122 "seek_hole": true, 00:22:27.122 "seek_data": true, 00:22:27.122 "copy": false, 00:22:27.122 "nvme_iov_md": false 00:22:27.122 }, 00:22:27.122 "driver_specific": { 00:22:27.122 "lvol": { 00:22:27.122 "lvol_store_uuid": "e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85", 00:22:27.122 "base_bdev": "nvme0n1", 00:22:27.122 "thin_provision": true, 00:22:27.122 "num_allocated_clusters": 0, 00:22:27.122 "snapshot": false, 00:22:27.122 "clone": false, 00:22:27.122 "esnap_clone": false 00:22:27.122 } 00:22:27.122 } 00:22:27.122 } 00:22:27.122 ]' 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:27.122 10:50:52 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:27.384 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:27.644 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:27.644 { 00:22:27.644 "name": "abc8d702-73ab-4943-8071-c2b211dbbfda", 00:22:27.644 "aliases": [ 00:22:27.644 "lvs/nvme0n1p0" 00:22:27.644 ], 00:22:27.644 "product_name": "Logical Volume", 00:22:27.644 "block_size": 4096, 00:22:27.644 "num_blocks": 26476544, 00:22:27.644 "uuid": "abc8d702-73ab-4943-8071-c2b211dbbfda", 00:22:27.644 "assigned_rate_limits": { 00:22:27.644 "rw_ios_per_sec": 0, 00:22:27.644 "rw_mbytes_per_sec": 0, 00:22:27.644 "r_mbytes_per_sec": 0, 00:22:27.644 "w_mbytes_per_sec": 0 00:22:27.644 }, 00:22:27.644 "claimed": false, 00:22:27.644 "zoned": false, 00:22:27.644 "supported_io_types": { 00:22:27.644 "read": true, 00:22:27.644 "write": true, 00:22:27.644 "unmap": true, 00:22:27.644 "flush": false, 00:22:27.644 "reset": true, 00:22:27.644 "nvme_admin": false, 00:22:27.644 "nvme_io": false, 00:22:27.644 "nvme_io_md": false, 00:22:27.644 "write_zeroes": true, 00:22:27.644 "zcopy": false, 00:22:27.644 "get_zone_info": false, 00:22:27.644 "zone_management": false, 00:22:27.644 "zone_append": false, 00:22:27.644 "compare": false, 00:22:27.644 "compare_and_write": false, 00:22:27.644 "abort": false, 00:22:27.644 "seek_hole": true, 00:22:27.644 "seek_data": true, 00:22:27.644 "copy": false, 00:22:27.644 "nvme_iov_md": false 00:22:27.644 }, 00:22:27.644 "driver_specific": { 00:22:27.644 "lvol": { 00:22:27.644 "lvol_store_uuid": "e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85", 00:22:27.644 "base_bdev": "nvme0n1", 00:22:27.644 "thin_provision": true, 00:22:27.644 "num_allocated_clusters": 0, 00:22:27.644 "snapshot": false, 00:22:27.644 "clone": false, 00:22:27.644 "esnap_clone": false 00:22:27.644 } 00:22:27.644 } 00:22:27.644 } 00:22:27.644 ]' 00:22:27.644 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:27.644 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:27.644 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:27.645 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:27.645 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:27.645 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:27.645 10:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:27.645 10:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:27.906 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abc8d702-73ab-4943-8071-c2b211dbbfda 00:22:28.167 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:28.167 { 00:22:28.167 "name": "abc8d702-73ab-4943-8071-c2b211dbbfda", 00:22:28.167 "aliases": [ 00:22:28.167 "lvs/nvme0n1p0" 00:22:28.167 ], 00:22:28.167 "product_name": "Logical Volume", 00:22:28.167 "block_size": 4096, 00:22:28.167 "num_blocks": 26476544, 00:22:28.167 "uuid": "abc8d702-73ab-4943-8071-c2b211dbbfda", 00:22:28.167 "assigned_rate_limits": { 00:22:28.167 "rw_ios_per_sec": 0, 00:22:28.167 "rw_mbytes_per_sec": 0, 00:22:28.167 "r_mbytes_per_sec": 0, 00:22:28.167 "w_mbytes_per_sec": 0 00:22:28.167 }, 00:22:28.167 "claimed": false, 00:22:28.167 "zoned": false, 00:22:28.167 "supported_io_types": { 00:22:28.167 "read": true, 00:22:28.167 "write": true, 00:22:28.167 "unmap": true, 00:22:28.167 "flush": false, 00:22:28.167 "reset": true, 00:22:28.167 "nvme_admin": false, 00:22:28.167 "nvme_io": false, 00:22:28.167 "nvme_io_md": false, 00:22:28.167 "write_zeroes": true, 00:22:28.167 "zcopy": false, 00:22:28.167 "get_zone_info": false, 00:22:28.167 "zone_management": false, 00:22:28.167 "zone_append": false, 00:22:28.167 "compare": false, 00:22:28.167 "compare_and_write": false, 00:22:28.167 "abort": false, 00:22:28.167 "seek_hole": true, 00:22:28.167 "seek_data": true, 00:22:28.167 "copy": false, 00:22:28.167 "nvme_iov_md": false 00:22:28.167 }, 00:22:28.167 "driver_specific": { 00:22:28.167 "lvol": { 00:22:28.167 "lvol_store_uuid": "e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85", 00:22:28.167 "base_bdev": "nvme0n1", 00:22:28.167 "thin_provision": true, 00:22:28.167 "num_allocated_clusters": 0, 00:22:28.167 "snapshot": false, 00:22:28.167 "clone": false, 00:22:28.167 "esnap_clone": false 00:22:28.167 } 00:22:28.167 } 00:22:28.167 } 00:22:28.167 ]' 00:22:28.167 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d abc8d702-73ab-4943-8071-c2b211dbbfda 
--l2p_dram_limit 10' 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:28.168 10:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d abc8d702-73ab-4943-8071-c2b211dbbfda --l2p_dram_limit 10 -c nvc0n1p0 00:22:28.430 [2024-11-18 10:50:54.066567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.066606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:28.430 [2024-11-18 10:50:54.066620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:28.430 [2024-11-18 10:50:54.066627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.066673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.066681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.430 [2024-11-18 10:50:54.066689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:28.430 [2024-11-18 10:50:54.066695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.066714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:28.430 [2024-11-18 10:50:54.067265] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:28.430 [2024-11-18 10:50:54.067291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.067298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.430 [2024-11-18 10:50:54.067307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:22:28.430 [2024-11-18 10:50:54.067314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.067364] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5fffd43e-f9ab-43fc-b706-00156854ebab 00:22:28.430 [2024-11-18 10:50:54.068279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.068308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:28.430 [2024-11-18 10:50:54.068317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:28.430 [2024-11-18 10:50:54.068324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.072930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.072958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.430 [2024-11-18 10:50:54.072968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:22:28.430 [2024-11-18 10:50:54.072975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.073043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.073052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.430 [2024-11-18 10:50:54.073059] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:28.430 [2024-11-18 10:50:54.073068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.073108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.073117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:28.430 [2024-11-18 10:50:54.073124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:28.430 [2024-11-18 10:50:54.073133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.430 [2024-11-18 10:50:54.073149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:28.430 [2024-11-18 10:50:54.075981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.430 [2024-11-18 10:50:54.076007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.430 [2024-11-18 10:50:54.076017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:22:28.430 [2024-11-18 10:50:54.076023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.431 [2024-11-18 10:50:54.076048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.431 [2024-11-18 10:50:54.076055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:28.431 [2024-11-18 10:50:54.076062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:28.431 [2024-11-18 10:50:54.076068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.431 [2024-11-18 10:50:54.076082] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:28.431 [2024-11-18 10:50:54.076187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:28.431 [2024-11-18 10:50:54.076211] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:28.431 [2024-11-18 10:50:54.076220] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:28.431 [2024-11-18 10:50:54.076230] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076237] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076244] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:28.431 [2024-11-18 10:50:54.076250] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:28.431 [2024-11-18 10:50:54.076258] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:28.431 [2024-11-18 10:50:54.076264] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:28.431 [2024-11-18 10:50:54.076271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.431 [2024-11-18 10:50:54.076277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:28.431 [2024-11-18 10:50:54.076284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:22:28.431 [2024-11-18 10:50:54.076294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.431 [2024-11-18 10:50:54.076360] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.431 [2024-11-18 10:50:54.076366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:28.431 [2024-11-18 10:50:54.076373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:28.431 [2024-11-18 10:50:54.076379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.431 [2024-11-18 10:50:54.076472] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:28.431 [2024-11-18 10:50:54.076509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:28.431 [2024-11-18 10:50:54.076517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:28.431 [2024-11-18 10:50:54.076535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:28.431 [2024-11-18 10:50:54.076554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.431 [2024-11-18 10:50:54.076565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:28.431 [2024-11-18 10:50:54.076570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:28.431 [2024-11-18 10:50:54.076577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.431 [2024-11-18 10:50:54.076581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:28.431 [2024-11-18 10:50:54.076588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:28.431 [2024-11-18 10:50:54.076593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:28.431 [2024-11-18 10:50:54.076606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:28.431 [2024-11-18 10:50:54.076624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:28.431 [2024-11-18 10:50:54.076641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:28.431 [2024-11-18 10:50:54.076658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076670] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:28.431 [2024-11-18 10:50:54.076676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:28.431 [2024-11-18 10:50:54.076696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.431 [2024-11-18 10:50:54.076707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:28.431 [2024-11-18 10:50:54.076712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:28.431 [2024-11-18 10:50:54.076719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.431 [2024-11-18 10:50:54.076724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:28.431 [2024-11-18 10:50:54.076730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:28.431 [2024-11-18 10:50:54.076735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:28.431 [2024-11-18 10:50:54.076745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:28.431 [2024-11-18 10:50:54.076751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076756] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:28.431 [2024-11-18 10:50:54.076764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:28.431 [2024-11-18 10:50:54.076769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.431 [2024-11-18 10:50:54.076783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:28.431 [2024-11-18 10:50:54.076790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:28.431 [2024-11-18 10:50:54.076795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:28.431 [2024-11-18 10:50:54.076802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:28.431 [2024-11-18 10:50:54.076807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:28.431 [2024-11-18 10:50:54.076813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:28.431 [2024-11-18 10:50:54.076821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:28.431 [2024-11-18 10:50:54.076829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:28.431 [2024-11-18 10:50:54.076844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:28.431 [2024-11-18 10:50:54.076849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:28.431 [2024-11-18 10:50:54.076856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:28.431 [2024-11-18 10:50:54.076861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:28.431 [2024-11-18 10:50:54.076868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:28.431 [2024-11-18 10:50:54.076874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:28.431 [2024-11-18 10:50:54.076881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:28.431 [2024-11-18 10:50:54.076886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:28.431 [2024-11-18 10:50:54.076894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:28.431 [2024-11-18 10:50:54.076925] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:28.431 [2024-11-18 10:50:54.076932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:28.431 [2024-11-18 10:50:54.076945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:28.431 [2024-11-18 10:50:54.076950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:28.431 [2024-11-18 10:50:54.076957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:28.432 [2024-11-18 10:50:54.076963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.432 [2024-11-18 10:50:54.076972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:28.432 [2024-11-18 10:50:54.076977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:22:28.432 [2024-11-18 10:50:54.076984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.432 [2024-11-18 10:50:54.077022] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:28.432 [2024-11-18 10:50:54.077033] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:32.641 [2024-11-18 10:50:57.935840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:57.935920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:32.641 [2024-11-18 10:50:57.935939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3858.800 ms 00:22:32.641 [2024-11-18 10:50:57.935951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:57.968715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:57.968781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.641 [2024-11-18 10:50:57.968794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.517 ms 00:22:32.641 [2024-11-18 10:50:57.968806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:57.968955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:57.968970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:32.641 [2024-11-18 10:50:57.968980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:32.641 [2024-11-18 10:50:57.968993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.004454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.004503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.641 [2024-11-18 10:50:58.004515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.406 ms 00:22:32.641 [2024-11-18 10:50:58.004526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.004561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.004577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.641 [2024-11-18 10:50:58.004587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:32.641 [2024-11-18 10:50:58.004596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.005182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.005243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.641 [2024-11-18 10:50:58.005256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:22:32.641 [2024-11-18 10:50:58.005266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.005385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.005397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.641 [2024-11-18 10:50:58.005410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:32.641 [2024-11-18 10:50:58.005423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.022595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.022645] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.641 [2024-11-18 10:50:58.022657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.154 ms 00:22:32.641 [2024-11-18 10:50:58.022667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.035666] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:32.641 [2024-11-18 10:50:58.039424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.039468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:32.641 [2024-11-18 10:50:58.039482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.662 ms 00:22:32.641 [2024-11-18 10:50:58.039490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.154532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.154612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:32.641 [2024-11-18 10:50:58.154631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.003 ms 00:22:32.641 [2024-11-18 10:50:58.154641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.154856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.154872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:32.641 [2024-11-18 10:50:58.154887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:22:32.641 [2024-11-18 10:50:58.154896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.180953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.181002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:32.641 [2024-11-18 10:50:58.181018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.998 ms 00:22:32.641 [2024-11-18 10:50:58.181027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.206284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.206335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:32.641 [2024-11-18 10:50:58.206352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.197 ms 00:22:32.641 [2024-11-18 10:50:58.206360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.206976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.206996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:32.641 [2024-11-18 10:50:58.207009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:22:32.641 [2024-11-18 10:50:58.207018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.292929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.292980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:32.641 [2024-11-18 10:50:58.292999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.859 ms 00:22:32.641 [2024-11-18 10:50:58.293008] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.319662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.319709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:32.641 [2024-11-18 10:50:58.319725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.554 ms 00:22:32.641 [2024-11-18 10:50:58.319733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.345279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.345323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:32.641 [2024-11-18 10:50:58.345337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.492 ms 00:22:32.641 [2024-11-18 10:50:58.345345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.371107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.371157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:32.641 [2024-11-18 10:50:58.371171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.709 ms 00:22:32.641 [2024-11-18 10:50:58.371178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.371242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.371253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:32.641 [2024-11-18 10:50:58.371268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:32.641 [2024-11-18 10:50:58.371276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.371368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.641 [2024-11-18 10:50:58.371380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:32.641 [2024-11-18 10:50:58.371394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:32.641 [2024-11-18 10:50:58.371402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.641 [2024-11-18 10:50:58.372541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4305.431 ms, result 0 00:22:32.641 { 00:22:32.641 "name": "ftl0", 00:22:32.641 "uuid": "5fffd43e-f9ab-43fc-b706-00156854ebab" 00:22:32.641 } 00:22:32.641 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:32.641 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:32.903 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:32.903 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:32.903 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:33.165 /dev/nbd0 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:33.165 1+0 records in 00:22:33.165 1+0 records out 00:22:33.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00235054 s, 1.7 MB/s 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:33.165 10:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:33.165 [2024-11-18 10:50:58.943467] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:33.165 [2024-11-18 10:50:58.943615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77703 ] 00:22:33.426 [2024-11-18 10:50:59.110220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.426 [2024-11-18 10:50:59.229142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.812  [2024-11-18T10:51:01.639Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-18T10:51:02.582Z] Copying: 394/1024 [MB] (205 MBps) [2024-11-18T10:51:03.525Z] Copying: 655/1024 [MB] (261 MBps) [2024-11-18T10:51:04.097Z] Copying: 909/1024 [MB] (253 MBps) [2024-11-18T10:51:04.669Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:22:38.785 00:22:38.785 10:51:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:41.329 10:51:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:41.329 [2024-11-18 10:51:06.640230] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
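(Condensed for reference: the construction and data path exercised above can be replayed by hand against a running SPDK target. This is a minimal sketch, not the test script itself; it assumes scripts/rpc.py and build/bin/spdk_dd from the SPDK repo are on PATH, that the lvol abc8d702-73ab-4943-8071-c2b211dbbfda already exists on nvme0n1 as created earlier in this run, and that the nbd kernel module is available; commands, sizes, and flags are copied from the log, and "testfile" stands in for the scratch path used above.)

# attach the second NVMe controller and carve a 5171 MiB write-buffer partition from it
# (5171 MiB mirrors the cache_size ftl/common.sh computed from the 103424 MiB base bdev earlier in this log)
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
rpc.py bdev_split_create nvc0n1 -s 5171 1          # yields nvc0n1p0
# build the FTL bdev on the lvol with a 10 MiB L2P DRAM limit and nvc0n1p0 as NV cache
rpc.py -t 240 bdev_ftl_create -b ftl0 -d abc8d702-73ab-4943-8071-c2b211dbbfda --l2p_dram_limit 10 -c nvc0n1p0
# expose ftl0 over nbd, then push 1 GiB (262144 x 4 KiB blocks) of random data through it
modprobe nbd
rpc.py nbd_start_disk ftl0 /dev/nbd0
spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
md5sum testfile                                    # reference checksum for the later verify
spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
# teardown as performed later in this run: sync /dev/nbd0; rpc.py nbd_stop_disk /dev/nbd0; rpc.py bdev_ftl_unload -b ftl0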
00:22:41.329 [2024-11-18 10:51:06.640327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77790 ] 00:22:41.329 [2024-11-18 10:51:06.795221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.329 [2024-11-18 10:51:06.887105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.271  [2024-11-18T10:51:09.539Z] Copying: 35/1024 [MB] (35 MBps) [2024-11-18T10:51:10.177Z] Copying: 59/1024 [MB] (24 MBps) [2024-11-18T10:51:11.142Z] Copying: 70/1024 [MB] (10 MBps) [2024-11-18T10:51:12.518Z] Copying: 80/1024 [MB] (10 MBps) [2024-11-18T10:51:13.452Z] Copying: 96/1024 [MB] (15 MBps) [2024-11-18T10:51:14.387Z] Copying: 111/1024 [MB] (15 MBps) [2024-11-18T10:51:15.335Z] Copying: 127/1024 [MB] (15 MBps) [2024-11-18T10:51:16.273Z] Copying: 145/1024 [MB] (18 MBps) [2024-11-18T10:51:17.208Z] Copying: 170/1024 [MB] (24 MBps) [2024-11-18T10:51:18.142Z] Copying: 194/1024 [MB] (24 MBps) [2024-11-18T10:51:19.518Z] Copying: 211/1024 [MB] (17 MBps) [2024-11-18T10:51:20.452Z] Copying: 226/1024 [MB] (14 MBps) [2024-11-18T10:51:21.387Z] Copying: 242/1024 [MB] (16 MBps) [2024-11-18T10:51:22.323Z] Copying: 263/1024 [MB] (20 MBps) [2024-11-18T10:51:23.259Z] Copying: 282/1024 [MB] (18 MBps) [2024-11-18T10:51:24.194Z] Copying: 299/1024 [MB] (16 MBps) [2024-11-18T10:51:25.129Z] Copying: 315/1024 [MB] (16 MBps) [2024-11-18T10:51:26.504Z] Copying: 333/1024 [MB] (17 MBps) [2024-11-18T10:51:27.439Z] Copying: 352/1024 [MB] (19 MBps) [2024-11-18T10:51:28.374Z] Copying: 366/1024 [MB] (14 MBps) [2024-11-18T10:51:29.314Z] Copying: 378/1024 [MB] (12 MBps) [2024-11-18T10:51:30.251Z] Copying: 397/1024 [MB] (18 MBps) [2024-11-18T10:51:31.186Z] Copying: 416/1024 [MB] (18 MBps) [2024-11-18T10:51:32.120Z] Copying: 436/1024 [MB] (20 MBps) [2024-11-18T10:51:33.495Z] Copying: 450/1024 [MB] (14 MBps) [2024-11-18T10:51:34.429Z] Copying: 465/1024 [MB] (14 MBps) [2024-11-18T10:51:35.365Z] Copying: 500/1024 [MB] (34 MBps) [2024-11-18T10:51:36.299Z] Copying: 525/1024 [MB] (24 MBps) [2024-11-18T10:51:37.234Z] Copying: 543/1024 [MB] (18 MBps) [2024-11-18T10:51:38.169Z] Copying: 562/1024 [MB] (18 MBps) [2024-11-18T10:51:39.114Z] Copying: 578/1024 [MB] (16 MBps) [2024-11-18T10:51:40.179Z] Copying: 594/1024 [MB] (16 MBps) [2024-11-18T10:51:41.115Z] Copying: 622/1024 [MB] (28 MBps) [2024-11-18T10:51:42.492Z] Copying: 642/1024 [MB] (19 MBps) [2024-11-18T10:51:43.427Z] Copying: 659/1024 [MB] (17 MBps) [2024-11-18T10:51:44.364Z] Copying: 679/1024 [MB] (19 MBps) [2024-11-18T10:51:45.299Z] Copying: 695/1024 [MB] (16 MBps) [2024-11-18T10:51:46.235Z] Copying: 711/1024 [MB] (15 MBps) [2024-11-18T10:51:47.169Z] Copying: 728/1024 [MB] (16 MBps) [2024-11-18T10:51:48.104Z] Copying: 745/1024 [MB] (16 MBps) [2024-11-18T10:51:49.479Z] Copying: 763/1024 [MB] (17 MBps) [2024-11-18T10:51:50.413Z] Copying: 782/1024 [MB] (19 MBps) [2024-11-18T10:51:51.347Z] Copying: 799/1024 [MB] (16 MBps) [2024-11-18T10:51:52.281Z] Copying: 815/1024 [MB] (16 MBps) [2024-11-18T10:51:53.217Z] Copying: 832/1024 [MB] (17 MBps) [2024-11-18T10:51:54.152Z] Copying: 849/1024 [MB] (17 MBps) [2024-11-18T10:51:55.528Z] Copying: 864/1024 [MB] (14 MBps) [2024-11-18T10:51:56.462Z] Copying: 880/1024 [MB] (16 MBps) [2024-11-18T10:51:57.397Z] Copying: 894/1024 [MB] (14 MBps) [2024-11-18T10:51:58.332Z] Copying: 911/1024 [MB] (16 MBps) 
[2024-11-18T10:51:59.268Z] Copying: 928/1024 [MB] (17 MBps) [2024-11-18T10:52:00.202Z] Copying: 947/1024 [MB] (19 MBps) [2024-11-18T10:52:01.136Z] Copying: 977/1024 [MB] (30 MBps) [2024-11-18T10:52:02.520Z] Copying: 989/1024 [MB] (11 MBps) [2024-11-18T10:52:02.778Z] Copying: 1002/1024 [MB] (13 MBps) [2024-11-18T10:52:03.715Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:23:37.831 00:23:37.831 10:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:37.831 10:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:37.831 10:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:38.093 [2024-11-18 10:52:03.761002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.761034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.093 [2024-11-18 10:52:03.761044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.093 [2024-11-18 10:52:03.761052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.761069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:38.093 [2024-11-18 10:52:03.763127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.763153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.093 [2024-11-18 10:52:03.763165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:23:38.093 [2024-11-18 10:52:03.763171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.765068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.765179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.093 [2024-11-18 10:52:03.765195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:23:38.093 [2024-11-18 10:52:03.765201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.777740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.777834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.093 [2024-11-18 10:52:03.777885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.510 ms 00:23:38.093 [2024-11-18 10:52:03.777904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.782763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.782843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.093 [2024-11-18 10:52:03.782890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:23:38.093 [2024-11-18 10:52:03.782908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.801271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.801462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.093 [2024-11-18 10:52:03.801532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.310 ms 00:23:38.093 [2024-11-18 10:52:03.801550] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.813548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.813634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.093 [2024-11-18 10:52:03.813736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.943 ms 00:23:38.093 [2024-11-18 10:52:03.813755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.813874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.813912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:38.093 [2024-11-18 10:52:03.813930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:38.093 [2024-11-18 10:52:03.813981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.831148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.831241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:38.093 [2024-11-18 10:52:03.831283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.140 ms 00:23:38.093 [2024-11-18 10:52:03.831300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.848570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.848651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:38.093 [2024-11-18 10:52:03.848692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.237 ms 00:23:38.093 [2024-11-18 10:52:03.848708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.865562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.865641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:38.093 [2024-11-18 10:52:03.865680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.819 ms 00:23:38.093 [2024-11-18 10:52:03.865696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.882492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.093 [2024-11-18 10:52:03.882565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:38.093 [2024-11-18 10:52:03.882602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.735 ms 00:23:38.093 [2024-11-18 10:52:03.882619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.093 [2024-11-18 10:52:03.882651] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:38.094 [2024-11-18 10:52:03.882672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882794] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.882994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883525] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.883984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 
10:52:03.884286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.884983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:23:38.094 [2024-11-18 10:52:03.885050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:38.094 [2024-11-18 10:52:03.885371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:38.095 [2024-11-18 10:52:03.885492] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:38.095 [2024-11-18 10:52:03.885499] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5fffd43e-f9ab-43fc-b706-00156854ebab 00:23:38.095 [2024-11-18 10:52:03.885505] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:38.095 [2024-11-18 10:52:03.885513] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:38.095 [2024-11-18 10:52:03.885518] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:38.095 [2024-11-18 10:52:03.885527] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:38.095 [2024-11-18 10:52:03.885532] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:38.095 [2024-11-18 10:52:03.885541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:38.095 [2024-11-18 10:52:03.885546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:38.095 [2024-11-18 10:52:03.885553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:38.095 [2024-11-18 10:52:03.885557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:38.095 [2024-11-18 10:52:03.885564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.095 [2024-11-18 10:52:03.885571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:38.095 [2024-11-18 10:52:03.885578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:23:38.095 [2024-11-18 10:52:03.885583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.095 [2024-11-18 10:52:03.895051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.095 [2024-11-18 10:52:03.895076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:38.095 [2024-11-18 10:52:03.895087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.440 ms 00:23:38.095 [2024-11-18 10:52:03.895092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.095 [2024-11-18 10:52:03.895408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.095 [2024-11-18 10:52:03.895417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:38.095 [2024-11-18 10:52:03.895425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:23:38.095 [2024-11-18 10:52:03.895430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.095 [2024-11-18 10:52:03.927976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.095 [2024-11-18 10:52:03.928003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.095 [2024-11-18 10:52:03.928013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.095 [2024-11-18 10:52:03.928018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.095 [2024-11-18 10:52:03.928062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.095 [2024-11-18 10:52:03.928068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.095 [2024-11-18 10:52:03.928075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.095 [2024-11-18 10:52:03.928081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.095 [2024-11-18 10:52:03.928132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.095 [2024-11-18 10:52:03.928139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.095 [2024-11-18 10:52:03.928148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.095 [2024-11-18 10:52:03.928154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.095 [2024-11-18 10:52:03.928169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.095 
[2024-11-18 10:52:03.928175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.095 [2024-11-18 10:52:03.928183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.095 [2024-11-18 10:52:03.928188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.356 [2024-11-18 10:52:03.986469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.356 [2024-11-18 10:52:03.986590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.357 [2024-11-18 10:52:03.986605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:03.986611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.357 [2024-11-18 10:52:04.034263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.357 [2024-11-18 10:52:04.034350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.357 [2024-11-18 10:52:04.034409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.357 [2024-11-18 10:52:04.034501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.357 [2024-11-18 10:52:04.034547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.357 [2024-11-18 10:52:04.034597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034641] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.357 [2024-11-18 10:52:04.034648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.357 [2024-11-18 10:52:04.034655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.357 [2024-11-18 10:52:04.034660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.357 [2024-11-18 10:52:04.034760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.730 ms, result 0 00:23:38.357 true 00:23:38.357 10:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77556 00:23:38.357 10:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77556 00:23:38.357 10:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:38.357 [2024-11-18 10:52:04.134621] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:38.357 [2024-11-18 10:52:04.134957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78385 ] 00:23:38.618 [2024-11-18 10:52:04.294273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.618 [2024-11-18 10:52:04.369490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.001  [2024-11-18T10:52:06.826Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-18T10:52:07.811Z] Copying: 521/1024 [MB] (261 MBps) [2024-11-18T10:52:08.777Z] Copying: 784/1024 [MB] (263 MBps) [2024-11-18T10:52:09.038Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:23:43.154 00:23:43.416 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77556 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:43.416 10:52:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:43.416 [2024-11-18 10:52:09.104616] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:43.416 [2024-11-18 10:52:09.104888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78438 ] 00:23:43.416 [2024-11-18 10:52:09.260951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.676 [2024-11-18 10:52:09.336018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.676 [2024-11-18 10:52:09.541751] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.676 [2024-11-18 10:52:09.541898] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.938 [2024-11-18 10:52:09.604475] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:43.938 [2024-11-18 10:52:09.604846] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:43.938 [2024-11-18 10:52:09.605102] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:43.938 [2024-11-18 10:52:09.780902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.938 [2024-11-18 10:52:09.781010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:43.938 [2024-11-18 10:52:09.781059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:43.938 [2024-11-18 10:52:09.781078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.938 [2024-11-18 10:52:09.781130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.938 [2024-11-18 10:52:09.781150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.938 [2024-11-18 10:52:09.781165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:43.938 [2024-11-18 10:52:09.781179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.938 [2024-11-18 10:52:09.781213] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:43.938 [2024-11-18 10:52:09.781769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:43.938 [2024-11-18 10:52:09.781851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.938 [2024-11-18 10:52:09.781900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.938 [2024-11-18 10:52:09.781918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:23:43.938 [2024-11-18 10:52:09.781932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.938 [2024-11-18 10:52:09.782883] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:43.938 [2024-11-18 10:52:09.792342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.938 [2024-11-18 10:52:09.792434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:43.938 [2024-11-18 10:52:09.792497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.461 ms 00:23:43.938 [2024-11-18 10:52:09.792515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.938 [2024-11-18 10:52:09.792559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.938 [2024-11-18 10:52:09.792629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:43.938 [2024-11-18 10:52:09.792645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:43.939 [2024-11-18 10:52:09.792682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.796947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.797026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.939 [2024-11-18 10:52:09.797064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:23:43.939 [2024-11-18 10:52:09.797080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.797143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.797159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.939 [2024-11-18 10:52:09.797174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:43.939 [2024-11-18 10:52:09.797188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.797241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.797263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:43.939 [2024-11-18 10:52:09.797279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:43.939 [2024-11-18 10:52:09.797317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.797397] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:43.939 [2024-11-18 10:52:09.799929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.800005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.939 [2024-11-18 10:52:09.800047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:23:43.939 [2024-11-18 10:52:09.800064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.800097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.800118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:43.939 [2024-11-18 10:52:09.800133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:43.939 [2024-11-18 10:52:09.800147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.800200] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:43.939 [2024-11-18 10:52:09.800239] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:43.939 [2024-11-18 10:52:09.800281] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:43.939 [2024-11-18 10:52:09.800309] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:43.939 [2024-11-18 10:52:09.800469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:43.939 [2024-11-18 10:52:09.800496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:43.939 
[2024-11-18 10:52:09.800546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:43.939 [2024-11-18 10:52:09.800572] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:43.939 [2024-11-18 10:52:09.800627] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:43.939 [2024-11-18 10:52:09.800651] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:43.939 [2024-11-18 10:52:09.800665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:43.939 [2024-11-18 10:52:09.800698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:43.939 [2024-11-18 10:52:09.800715] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:43.939 [2024-11-18 10:52:09.800729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.800744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:43.939 [2024-11-18 10:52:09.800759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:23:43.939 [2024-11-18 10:52:09.800773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.800868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.939 [2024-11-18 10:52:09.800890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:43.939 [2024-11-18 10:52:09.800904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:43.939 [2024-11-18 10:52:09.800918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.939 [2024-11-18 10:52:09.801029] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:43.939 [2024-11-18 10:52:09.801049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:43.939 [2024-11-18 10:52:09.801065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:43.939 [2024-11-18 10:52:09.801132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:43.939 [2024-11-18 10:52:09.801174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.939 [2024-11-18 10:52:09.801247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:43.939 [2024-11-18 10:52:09.801267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:43.939 [2024-11-18 10:52:09.801280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.939 [2024-11-18 10:52:09.801294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:43.939 [2024-11-18 10:52:09.801308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:43.939 [2024-11-18 10:52:09.801346] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:43.939 [2024-11-18 10:52:09.801382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:43.939 [2024-11-18 10:52:09.801423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:43.939 [2024-11-18 10:52:09.801524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:43.939 [2024-11-18 10:52:09.801588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:43.939 [2024-11-18 10:52:09.801631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:43.939 [2024-11-18 10:52:09.801693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.939 [2024-11-18 10:52:09.801722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:43.939 [2024-11-18 10:52:09.801736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:43.939 [2024-11-18 10:52:09.801750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.939 [2024-11-18 10:52:09.801764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:43.939 [2024-11-18 10:52:09.801777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:43.939 [2024-11-18 10:52:09.801791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:43.939 [2024-11-18 10:52:09.801836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:43.939 [2024-11-18 10:52:09.801850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.939 [2024-11-18 10:52:09.801863] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:43.939 [2024-11-18 10:52:09.801878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:43.939 [2024-11-18 10:52:09.801892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.939 [2024-11-18 10:52:09.801909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.939 [2024-11-18 
10:52:09.801924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:43.939 [2024-11-18 10:52:09.801938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:43.939 [2024-11-18 10:52:09.801968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:43.940 [2024-11-18 10:52:09.802017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:43.940 [2024-11-18 10:52:09.802034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:43.940 [2024-11-18 10:52:09.802076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:43.940 [2024-11-18 10:52:09.802093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:43.940 [2024-11-18 10:52:09.802117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:43.940 [2024-11-18 10:52:09.802180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:43.940 [2024-11-18 10:52:09.802212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:43.940 [2024-11-18 10:52:09.802234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:43.940 [2024-11-18 10:52:09.802257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:43.940 [2024-11-18 10:52:09.802298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:43.940 [2024-11-18 10:52:09.802320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:43.940 [2024-11-18 10:52:09.802342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:43.940 [2024-11-18 10:52:09.802388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:43.940 [2024-11-18 10:52:09.802411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:43.940 [2024-11-18 10:52:09.802559] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:43.940 [2024-11-18 10:52:09.802583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:43.940 [2024-11-18 10:52:09.802627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:43.940 [2024-11-18 10:52:09.802665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:43.940 [2024-11-18 10:52:09.802687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:43.940 [2024-11-18 10:52:09.802737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.940 [2024-11-18 10:52:09.802754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:43.940 [2024-11-18 10:52:09.802769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:23:43.940 [2024-11-18 10:52:09.802783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.823374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.202 [2024-11-18 10:52:09.823464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:44.202 [2024-11-18 10:52:09.823506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.532 ms 00:23:44.202 [2024-11-18 10:52:09.823523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.823594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.202 [2024-11-18 10:52:09.823614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:44.202 [2024-11-18 10:52:09.823702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:44.202 [2024-11-18 10:52:09.823718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.862108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.202 [2024-11-18 10:52:09.862223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:44.202 [2024-11-18 10:52:09.862269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.343 ms 00:23:44.202 [2024-11-18 10:52:09.862292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.862331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.202 [2024-11-18 10:52:09.862349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:44.202 [2024-11-18 10:52:09.862364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:44.202 [2024-11-18 10:52:09.862378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.862693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.202 [2024-11-18 10:52:09.862724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:44.202 [2024-11-18 10:52:09.862740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:44.202 [2024-11-18 10:52:09.862754] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.862865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.202 [2024-11-18 10:52:09.862882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:44.202 [2024-11-18 10:52:09.862935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:44.202 [2024-11-18 10:52:09.862952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.202 [2024-11-18 10:52:09.873460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.873544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:44.203 [2024-11-18 10:52:09.873581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.483 ms 00:23:44.203 [2024-11-18 10:52:09.873598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.883225] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:44.203 [2024-11-18 10:52:09.883312] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:44.203 [2024-11-18 10:52:09.883324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.883329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:44.203 [2024-11-18 10:52:09.883336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.637 ms 00:23:44.203 [2024-11-18 10:52:09.883341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.901797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.901826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:44.203 [2024-11-18 10:52:09.901841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.430 ms 00:23:44.203 [2024-11-18 10:52:09.901846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.910859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.910949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:44.203 [2024-11-18 10:52:09.910959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.986 ms 00:23:44.203 [2024-11-18 10:52:09.910965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.919443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.919520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:44.203 [2024-11-18 10:52:09.919558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.457 ms 00:23:44.203 [2024-11-18 10:52:09.919574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.920019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.920089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:44.203 [2024-11-18 10:52:09.920128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:23:44.203 [2024-11-18 10:52:09.920144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 
[2024-11-18 10:52:09.963078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.963215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:44.203 [2024-11-18 10:52:09.963257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.910 ms 00:23:44.203 [2024-11-18 10:52:09.963275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.971182] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:44.203 [2024-11-18 10:52:09.972998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.973078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:44.203 [2024-11-18 10:52:09.973116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.688 ms 00:23:44.203 [2024-11-18 10:52:09.973133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.973198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.973479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:44.203 [2024-11-18 10:52:09.973543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:44.203 [2024-11-18 10:52:09.973563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.973637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.973759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:44.203 [2024-11-18 10:52:09.973778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:44.203 [2024-11-18 10:52:09.973793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.973832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.973855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:44.203 [2024-11-18 10:52:09.973904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:44.203 [2024-11-18 10:52:09.973920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.973967] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:44.203 [2024-11-18 10:52:09.973986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.974000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:44.203 [2024-11-18 10:52:09.974016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:44.203 [2024-11-18 10:52:09.974068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.991777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.991869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:44.203 [2024-11-18 10:52:09.991910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.682 ms 00:23:44.203 [2024-11-18 10:52:09.991928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.991987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.203 [2024-11-18 10:52:09.992008] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:44.203 [2024-11-18 10:52:09.992044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:44.203 [2024-11-18 10:52:09.992061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.203 [2024-11-18 10:52:09.992782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.557 ms, result 0 00:23:45.146  [2024-11-18T10:52:12.419Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-18T10:52:13.362Z] Copying: 35/1024 [MB] (15 MBps) [2024-11-18T10:52:14.306Z] Copying: 52/1024 [MB] (16 MBps) [2024-11-18T10:52:15.251Z] Copying: 95/1024 [MB] (43 MBps) [2024-11-18T10:52:16.195Z] Copying: 111/1024 [MB] (16 MBps) [2024-11-18T10:52:17.139Z] Copying: 127/1024 [MB] (15 MBps) [2024-11-18T10:52:18.083Z] Copying: 149/1024 [MB] (21 MBps) [2024-11-18T10:52:19.028Z] Copying: 166/1024 [MB] (16 MBps) [2024-11-18T10:52:20.424Z] Copying: 186/1024 [MB] (20 MBps) [2024-11-18T10:52:21.369Z] Copying: 205/1024 [MB] (19 MBps) [2024-11-18T10:52:22.312Z] Copying: 224/1024 [MB] (18 MBps) [2024-11-18T10:52:23.253Z] Copying: 241/1024 [MB] (17 MBps) [2024-11-18T10:52:24.194Z] Copying: 256/1024 [MB] (14 MBps) [2024-11-18T10:52:25.134Z] Copying: 273/1024 [MB] (16 MBps) [2024-11-18T10:52:26.075Z] Copying: 295/1024 [MB] (22 MBps) [2024-11-18T10:52:27.017Z] Copying: 312/1024 [MB] (17 MBps) [2024-11-18T10:52:28.399Z] Copying: 330/1024 [MB] (18 MBps) [2024-11-18T10:52:29.341Z] Copying: 347/1024 [MB] (16 MBps) [2024-11-18T10:52:30.279Z] Copying: 363/1024 [MB] (16 MBps) [2024-11-18T10:52:31.219Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-18T10:52:32.159Z] Copying: 392912/1048576 [kB] (9960 kBps) [2024-11-18T10:52:33.100Z] Copying: 396/1024 [MB] (12 MBps) [2024-11-18T10:52:34.042Z] Copying: 411/1024 [MB] (14 MBps) [2024-11-18T10:52:35.035Z] Copying: 425/1024 [MB] (14 MBps) [2024-11-18T10:52:36.439Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-18T10:52:37.375Z] Copying: 449/1024 [MB] (12 MBps) [2024-11-18T10:52:38.365Z] Copying: 460/1024 [MB] (10 MBps) [2024-11-18T10:52:39.304Z] Copying: 471/1024 [MB] (11 MBps) [2024-11-18T10:52:40.242Z] Copying: 487/1024 [MB] (15 MBps) [2024-11-18T10:52:41.178Z] Copying: 512/1024 [MB] (25 MBps) [2024-11-18T10:52:42.115Z] Copying: 530/1024 [MB] (18 MBps) [2024-11-18T10:52:43.053Z] Copying: 545/1024 [MB] (14 MBps) [2024-11-18T10:52:44.434Z] Copying: 559/1024 [MB] (14 MBps) [2024-11-18T10:52:45.374Z] Copying: 575/1024 [MB] (15 MBps) [2024-11-18T10:52:46.313Z] Copying: 588/1024 [MB] (12 MBps) [2024-11-18T10:52:47.253Z] Copying: 607/1024 [MB] (19 MBps) [2024-11-18T10:52:48.192Z] Copying: 621/1024 [MB] (14 MBps) [2024-11-18T10:52:49.130Z] Copying: 642/1024 [MB] (21 MBps) [2024-11-18T10:52:50.068Z] Copying: 656/1024 [MB] (13 MBps) [2024-11-18T10:52:51.008Z] Copying: 672/1024 [MB] (15 MBps) [2024-11-18T10:52:52.393Z] Copying: 687/1024 [MB] (14 MBps) [2024-11-18T10:52:53.336Z] Copying: 700/1024 [MB] (13 MBps) [2024-11-18T10:52:54.279Z] Copying: 712/1024 [MB] (12 MBps) [2024-11-18T10:52:55.222Z] Copying: 729/1024 [MB] (17 MBps) [2024-11-18T10:52:56.167Z] Copying: 751/1024 [MB] (22 MBps) [2024-11-18T10:52:57.109Z] Copying: 762/1024 [MB] (10 MBps) [2024-11-18T10:52:58.052Z] Copying: 773/1024 [MB] (10 MBps) [2024-11-18T10:52:59.435Z] Copying: 783/1024 [MB] (10 MBps) [2024-11-18T10:53:00.377Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-18T10:53:01.320Z] Copying: 805/1024 [MB] (11 MBps) [2024-11-18T10:53:02.263Z] Copying: 817/1024 [MB] 
(12 MBps) [2024-11-18T10:53:03.206Z] Copying: 847712/1048576 [kB] (10128 kBps) [2024-11-18T10:53:04.187Z] Copying: 838/1024 [MB] (10 MBps) [2024-11-18T10:53:05.155Z] Copying: 855/1024 [MB] (17 MBps) [2024-11-18T10:53:06.099Z] Copying: 880/1024 [MB] (24 MBps) [2024-11-18T10:53:07.042Z] Copying: 893/1024 [MB] (13 MBps) [2024-11-18T10:53:08.435Z] Copying: 912/1024 [MB] (18 MBps) [2024-11-18T10:53:09.007Z] Copying: 926/1024 [MB] (14 MBps) [2024-11-18T10:53:10.390Z] Copying: 944/1024 [MB] (17 MBps) [2024-11-18T10:53:11.333Z] Copying: 960/1024 [MB] (15 MBps) [2024-11-18T10:53:12.276Z] Copying: 977/1024 [MB] (17 MBps) [2024-11-18T10:53:13.221Z] Copying: 998/1024 [MB] (21 MBps) [2024-11-18T10:53:14.166Z] Copying: 1018/1024 [MB] (19 MBps) [2024-11-18T10:53:14.428Z] Copying: 1048344/1048576 [kB] (5652 kBps) [2024-11-18T10:53:14.428Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 10:53:14.228982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.229053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:48.544 [2024-11-18 10:53:14.229070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:48.544 [2024-11-18 10:53:14.229081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.544 [2024-11-18 10:53:14.229199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:48.544 [2024-11-18 10:53:14.232171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.232229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:48.544 [2024-11-18 10:53:14.232242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.932 ms 00:24:48.544 [2024-11-18 10:53:14.232251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.544 [2024-11-18 10:53:14.243896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.244072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:48.544 [2024-11-18 10:53:14.244146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.537 ms 00:24:48.544 [2024-11-18 10:53:14.244171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.544 [2024-11-18 10:53:14.267338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.267510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:48.544 [2024-11-18 10:53:14.267608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.118 ms 00:24:48.544 [2024-11-18 10:53:14.267636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.544 [2024-11-18 10:53:14.273784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.273941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:48.544 [2024-11-18 10:53:14.274010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:24:48.544 [2024-11-18 10:53:14.274033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.544 [2024-11-18 10:53:14.300972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.301166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:48.544 [2024-11-18 10:53:14.301522] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.862 ms 00:24:48.544 [2024-11-18 10:53:14.301595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.544 [2024-11-18 10:53:14.317148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.544 [2024-11-18 10:53:14.317328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:48.544 [2024-11-18 10:53:14.317390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.499 ms 00:24:48.544 [2024-11-18 10:53:14.317413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.806 [2024-11-18 10:53:14.565477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.806 [2024-11-18 10:53:14.565654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:48.806 [2024-11-18 10:53:14.565737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 248.009 ms 00:24:48.806 [2024-11-18 10:53:14.565766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.806 [2024-11-18 10:53:14.592080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.806 [2024-11-18 10:53:14.592289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:48.806 [2024-11-18 10:53:14.592396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.281 ms 00:24:48.806 [2024-11-18 10:53:14.592440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.806 [2024-11-18 10:53:14.617446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.806 [2024-11-18 10:53:14.617615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:48.806 [2024-11-18 10:53:14.617689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.955 ms 00:24:48.806 [2024-11-18 10:53:14.617700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.806 [2024-11-18 10:53:14.642303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.806 [2024-11-18 10:53:14.642352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:48.806 [2024-11-18 10:53:14.642364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.546 ms 00:24:48.806 [2024-11-18 10:53:14.642373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.806 [2024-11-18 10:53:14.667181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.806 [2024-11-18 10:53:14.667251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:48.806 [2024-11-18 10:53:14.667264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.738 ms 00:24:48.806 [2024-11-18 10:53:14.667272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.806 [2024-11-18 10:53:14.667317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:48.806 [2024-11-18 10:53:14.667333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:24:48.806 [2024-11-18 10:53:14.667344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 
261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:48.806 [2024-11-18 10:53:14.667665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667783] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 
10:53:14.667979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.667994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:48.807 [2024-11-18 10:53:14.668159] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:48.807 [2024-11-18 10:53:14.668167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5fffd43e-f9ab-43fc-b706-00156854ebab 00:24:48.807 [2024-11-18 10:53:14.668178] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:24:48.807 [2024-11-18 10:53:14.668192] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:24:48.807 [2024-11-18 10:53:14.668222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:24:48.807 [2024-11-18 10:53:14.668232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:48.807 [2024-11-18 10:53:14.668240] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:48.807 [2024-11-18 10:53:14.668249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:48.807 [2024-11-18 10:53:14.668257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:48.807 [2024-11-18 10:53:14.668265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:48.807 [2024-11-18 10:53:14.668272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:48.807 [2024-11-18 10:53:14.668279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.807 [2024-11-18 10:53:14.668288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:48.807 [2024-11-18 10:53:14.668297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:24:48.807 [2024-11-18 10:53:14.668305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.807 [2024-11-18 10:53:14.681792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.807 [2024-11-18 10:53:14.681836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:48.807 [2024-11-18 10:53:14.681848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.468 ms 00:24:48.807 [2024-11-18 10:53:14.681857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.807 [2024-11-18 10:53:14.682280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.807 [2024-11-18 10:53:14.682292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:48.807 [2024-11-18 10:53:14.682302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:24:48.807 [2024-11-18 10:53:14.682312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.068 [2024-11-18 10:53:14.718862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.718911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.068 [2024-11-18 10:53:14.718925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.068 [2024-11-18 10:53:14.718935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.068 [2024-11-18 10:53:14.719005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.719014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.068 [2024-11-18 10:53:14.719024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.068 [2024-11-18 10:53:14.719034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.068 [2024-11-18 10:53:14.719131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.719145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.068 [2024-11-18 10:53:14.719156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.068 [2024-11-18 10:53:14.719165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:49.068 [2024-11-18 10:53:14.719181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.719193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.068 [2024-11-18 10:53:14.719227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.068 [2024-11-18 10:53:14.719236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.068 [2024-11-18 10:53:14.803384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.803437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.068 [2024-11-18 10:53:14.803450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.068 [2024-11-18 10:53:14.803459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.068 [2024-11-18 10:53:14.872993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.873310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.068 [2024-11-18 10:53:14.873332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.068 [2024-11-18 10:53:14.873342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.068 [2024-11-18 10:53:14.873410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.068 [2024-11-18 10:53:14.873421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.069 [2024-11-18 10:53:14.873432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.069 [2024-11-18 10:53:14.873441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.069 [2024-11-18 10:53:14.873511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.069 [2024-11-18 10:53:14.873522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.069 [2024-11-18 10:53:14.873532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.069 [2024-11-18 10:53:14.873540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.069 [2024-11-18 10:53:14.873646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.069 [2024-11-18 10:53:14.873662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.069 [2024-11-18 10:53:14.873672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.069 [2024-11-18 10:53:14.873682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.069 [2024-11-18 10:53:14.873714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.069 [2024-11-18 10:53:14.873724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.069 [2024-11-18 10:53:14.873734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.069 [2024-11-18 10:53:14.873744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.069 [2024-11-18 10:53:14.873785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.069 [2024-11-18 10:53:14.873797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.069 [2024-11-18 10:53:14.873807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.069 [2024-11-18 
10:53:14.873815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.069 [2024-11-18 10:53:14.873860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.069 [2024-11-18 10:53:14.873872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.069 [2024-11-18 10:53:14.873881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.069 [2024-11-18 10:53:14.873890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.069 [2024-11-18 10:53:14.874025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.760 ms, result 0 00:24:50.454 00:24:50.454 00:24:50.454 10:53:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:53.000 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:53.000 [2024-11-18 10:53:18.370531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:53.000 [2024-11-18 10:53:18.370620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79142 ] 00:24:53.000 [2024-11-18 10:53:18.526555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.000 [2024-11-18 10:53:18.624260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.262 [2024-11-18 10:53:18.910486] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:53.262 [2024-11-18 10:53:18.910564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:53.262 [2024-11-18 10:53:19.073500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.262 [2024-11-18 10:53:19.073553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:53.262 [2024-11-18 10:53:19.073574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:53.262 [2024-11-18 10:53:19.073583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.262 [2024-11-18 10:53:19.073636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.262 [2024-11-18 10:53:19.073647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.262 [2024-11-18 10:53:19.073658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:53.262 [2024-11-18 10:53:19.073667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.262 [2024-11-18 10:53:19.073688] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:53.262 [2024-11-18 10:53:19.074461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:53.262 [2024-11-18 10:53:19.074482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.262 [2024-11-18 10:53:19.074490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.262 [2024-11-18 10:53:19.074500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 
ms 00:24:53.262 [2024-11-18 10:53:19.074509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.262 [2024-11-18 10:53:19.076136] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:53.262 [2024-11-18 10:53:19.090359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.262 [2024-11-18 10:53:19.090408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:53.262 [2024-11-18 10:53:19.090424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.225 ms 00:24:53.262 [2024-11-18 10:53:19.090434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.262 [2024-11-18 10:53:19.090515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.090528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:53.263 [2024-11-18 10:53:19.090537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:53.263 [2024-11-18 10:53:19.090545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.098636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.098679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.263 [2024-11-18 10:53:19.098693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.012 ms 00:24:53.263 [2024-11-18 10:53:19.098702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.098788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.098802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.263 [2024-11-18 10:53:19.098810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:53.263 [2024-11-18 10:53:19.098821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.098866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.098879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:53.263 [2024-11-18 10:53:19.098888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:53.263 [2024-11-18 10:53:19.098896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.098922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:53.263 [2024-11-18 10:53:19.102909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.102948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.263 [2024-11-18 10:53:19.102958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.992 ms 00:24:53.263 [2024-11-18 10:53:19.102971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.103007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.103016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:53.263 [2024-11-18 10:53:19.103026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:53.263 [2024-11-18 10:53:19.103034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 
[2024-11-18 10:53:19.103085] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:53.263 [2024-11-18 10:53:19.103108] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:53.263 [2024-11-18 10:53:19.103147] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:53.263 [2024-11-18 10:53:19.103168] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:53.263 [2024-11-18 10:53:19.103292] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:53.263 [2024-11-18 10:53:19.103306] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:53.263 [2024-11-18 10:53:19.103320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:53.263 [2024-11-18 10:53:19.103334] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103344] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103354] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:53.263 [2024-11-18 10:53:19.103362] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:53.263 [2024-11-18 10:53:19.103373] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:53.263 [2024-11-18 10:53:19.103380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:53.263 [2024-11-18 10:53:19.103391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.103401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:53.263 [2024-11-18 10:53:19.103410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:24:53.263 [2024-11-18 10:53:19.103417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.103503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.263 [2024-11-18 10:53:19.103514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:53.263 [2024-11-18 10:53:19.103523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:53.263 [2024-11-18 10:53:19.103530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.263 [2024-11-18 10:53:19.103635] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:53.263 [2024-11-18 10:53:19.103650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:53.263 [2024-11-18 10:53:19.103661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:53.263 [2024-11-18 10:53:19.103686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103700] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:24:53.263 [2024-11-18 10:53:19.103707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:53.263 [2024-11-18 10:53:19.103723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:53.263 [2024-11-18 10:53:19.103730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:53.263 [2024-11-18 10:53:19.103739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:53.263 [2024-11-18 10:53:19.103747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:53.263 [2024-11-18 10:53:19.103754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:53.263 [2024-11-18 10:53:19.103771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:53.263 [2024-11-18 10:53:19.103786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:53.263 [2024-11-18 10:53:19.103808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:53.263 [2024-11-18 10:53:19.103832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:53.263 [2024-11-18 10:53:19.103855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:53.263 [2024-11-18 10:53:19.103877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.263 [2024-11-18 10:53:19.103890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:53.263 [2024-11-18 10:53:19.103899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:53.263 [2024-11-18 10:53:19.103914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:53.263 [2024-11-18 10:53:19.103921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:53.263 [2024-11-18 10:53:19.103931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:53.263 [2024-11-18 10:53:19.103939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:53.263 [2024-11-18 10:53:19.103946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:53.263 [2024-11-18 
10:53:19.103953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:53.263 [2024-11-18 10:53:19.103969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:53.263 [2024-11-18 10:53:19.103976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.103983] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:53.263 [2024-11-18 10:53:19.103992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:53.263 [2024-11-18 10:53:19.104000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:53.263 [2024-11-18 10:53:19.104009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.263 [2024-11-18 10:53:19.104018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:53.263 [2024-11-18 10:53:19.104025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:53.263 [2024-11-18 10:53:19.104031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:53.263 [2024-11-18 10:53:19.104038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:53.263 [2024-11-18 10:53:19.104046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:53.263 [2024-11-18 10:53:19.104052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:53.263 [2024-11-18 10:53:19.104060] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:53.263 [2024-11-18 10:53:19.104071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:53.263 [2024-11-18 10:53:19.104079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:53.263 [2024-11-18 10:53:19.104087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:53.263 [2024-11-18 10:53:19.104096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:53.264 [2024-11-18 10:53:19.104104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:53.264 [2024-11-18 10:53:19.104112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:53.264 [2024-11-18 10:53:19.104119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:53.264 [2024-11-18 10:53:19.104128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:53.264 [2024-11-18 10:53:19.104136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:53.264 [2024-11-18 10:53:19.104143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:53.264 [2024-11-18 10:53:19.104152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:53.264 [2024-11-18 10:53:19.104160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:53.264 [2024-11-18 10:53:19.104168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:53.264 [2024-11-18 10:53:19.104175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:53.264 [2024-11-18 10:53:19.104186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:53.264 [2024-11-18 10:53:19.104194] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:53.264 [2024-11-18 10:53:19.104226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:53.264 [2024-11-18 10:53:19.104235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:53.264 [2024-11-18 10:53:19.104245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:53.264 [2024-11-18 10:53:19.104253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:53.264 [2024-11-18 10:53:19.104261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:53.264 [2024-11-18 10:53:19.104270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.264 [2024-11-18 10:53:19.104278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:53.264 [2024-11-18 10:53:19.104287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:24:53.264 [2024-11-18 10:53:19.104296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.264 [2024-11-18 10:53:19.136343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.264 [2024-11-18 10:53:19.136389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.264 [2024-11-18 10:53:19.136401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.003 ms 00:24:53.264 [2024-11-18 10:53:19.136425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.264 [2024-11-18 10:53:19.136513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.264 [2024-11-18 10:53:19.136523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:53.264 [2024-11-18 10:53:19.136532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:53.264 [2024-11-18 10:53:19.136540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.185360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.185412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:53.524 [2024-11-18 10:53:19.185427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.760 ms 00:24:53.524 [2024-11-18 10:53:19.185437] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.185485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.185497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:53.524 [2024-11-18 10:53:19.185508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:53.524 [2024-11-18 10:53:19.185520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.186079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.186115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:53.524 [2024-11-18 10:53:19.186128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:24:53.524 [2024-11-18 10:53:19.186138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.186313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.186335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:53.524 [2024-11-18 10:53:19.186345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:53.524 [2024-11-18 10:53:19.186360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.202260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.202417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:53.524 [2024-11-18 10:53:19.202485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.876 ms 00:24:53.524 [2024-11-18 10:53:19.202508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.216703] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:53.524 [2024-11-18 10:53:19.216879] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:53.524 [2024-11-18 10:53:19.216947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.216970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:53.524 [2024-11-18 10:53:19.216992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.318 ms 00:24:53.524 [2024-11-18 10:53:19.217013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.242441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.242608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:53.524 [2024-11-18 10:53:19.242669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.377 ms 00:24:53.524 [2024-11-18 10:53:19.242693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.255219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.255382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:53.524 [2024-11-18 10:53:19.255438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.471 ms 00:24:53.524 [2024-11-18 10:53:19.255461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:53.524 [2024-11-18 10:53:19.268113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.268282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:53.524 [2024-11-18 10:53:19.268340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.602 ms 00:24:53.524 [2024-11-18 10:53:19.268363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.269342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.269523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:53.524 [2024-11-18 10:53:19.269589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:24:53.524 [2024-11-18 10:53:19.269621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.334255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.334484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:53.524 [2024-11-18 10:53:19.334559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.590 ms 00:24:53.524 [2024-11-18 10:53:19.334584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.345768] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:53.524 [2024-11-18 10:53:19.349010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.349153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:53.524 [2024-11-18 10:53:19.349221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.372 ms 00:24:53.524 [2024-11-18 10:53:19.349247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.524 [2024-11-18 10:53:19.349346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.524 [2024-11-18 10:53:19.349376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:53.524 [2024-11-18 10:53:19.349399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:53.525 [2024-11-18 10:53:19.349422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.525 [2024-11-18 10:53:19.351085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.525 [2024-11-18 10:53:19.351256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:53.525 [2024-11-18 10:53:19.351275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:24:53.525 [2024-11-18 10:53:19.351285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.525 [2024-11-18 10:53:19.351322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.525 [2024-11-18 10:53:19.351332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:53.525 [2024-11-18 10:53:19.351342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:53.525 [2024-11-18 10:53:19.351350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.525 [2024-11-18 10:53:19.351391] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:53.525 [2024-11-18 10:53:19.351406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.525 
[2024-11-18 10:53:19.351416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:53.525 [2024-11-18 10:53:19.351427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:53.525 [2024-11-18 10:53:19.351435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.525 [2024-11-18 10:53:19.376857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.525 [2024-11-18 10:53:19.376907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:53.525 [2024-11-18 10:53:19.376920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.401 ms 00:24:53.525 [2024-11-18 10:53:19.376935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.525 [2024-11-18 10:53:19.377021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.525 [2024-11-18 10:53:19.377032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:53.525 [2024-11-18 10:53:19.377041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:53.525 [2024-11-18 10:53:19.377050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.525 [2024-11-18 10:53:19.378307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.299 ms, result 0 00:24:54.910  [2024-11-18T10:53:21.736Z] Copying: 1036/1048576 [kB] (1036 kBps) [2024-11-18T10:53:22.678Z] Copying: 4380/1048576 [kB] (3344 kBps) [2024-11-18T10:53:23.622Z] Copying: 37/1024 [MB] (33 MBps) [2024-11-18T10:53:24.566Z] Copying: 66/1024 [MB] (28 MBps) [2024-11-18T10:53:25.952Z] Copying: 98/1024 [MB] (31 MBps) [2024-11-18T10:53:26.894Z] Copying: 126/1024 [MB] (28 MBps) [2024-11-18T10:53:27.837Z] Copying: 152/1024 [MB] (25 MBps) [2024-11-18T10:53:28.780Z] Copying: 175/1024 [MB] (23 MBps) [2024-11-18T10:53:29.722Z] Copying: 200/1024 [MB] (24 MBps) [2024-11-18T10:53:30.665Z] Copying: 229/1024 [MB] (28 MBps) [2024-11-18T10:53:31.610Z] Copying: 252/1024 [MB] (22 MBps) [2024-11-18T10:53:32.993Z] Copying: 279/1024 [MB] (27 MBps) [2024-11-18T10:53:33.628Z] Copying: 305/1024 [MB] (25 MBps) [2024-11-18T10:53:34.571Z] Copying: 324/1024 [MB] (18 MBps) [2024-11-18T10:53:35.961Z] Copying: 348/1024 [MB] (24 MBps) [2024-11-18T10:53:36.904Z] Copying: 363/1024 [MB] (15 MBps) [2024-11-18T10:53:37.848Z] Copying: 402/1024 [MB] (38 MBps) [2024-11-18T10:53:38.791Z] Copying: 418/1024 [MB] (15 MBps) [2024-11-18T10:53:39.733Z] Copying: 436/1024 [MB] (18 MBps) [2024-11-18T10:53:40.675Z] Copying: 452/1024 [MB] (15 MBps) [2024-11-18T10:53:41.620Z] Copying: 480/1024 [MB] (27 MBps) [2024-11-18T10:53:42.562Z] Copying: 510/1024 [MB] (29 MBps) [2024-11-18T10:53:43.947Z] Copying: 547/1024 [MB] (37 MBps) [2024-11-18T10:53:44.890Z] Copying: 575/1024 [MB] (28 MBps) [2024-11-18T10:53:45.834Z] Copying: 608/1024 [MB] (32 MBps) [2024-11-18T10:53:46.777Z] Copying: 638/1024 [MB] (30 MBps) [2024-11-18T10:53:47.721Z] Copying: 668/1024 [MB] (29 MBps) [2024-11-18T10:53:48.663Z] Copying: 699/1024 [MB] (31 MBps) [2024-11-18T10:53:49.604Z] Copying: 739/1024 [MB] (40 MBps) [2024-11-18T10:53:50.989Z] Copying: 767/1024 [MB] (27 MBps) [2024-11-18T10:53:51.561Z] Copying: 799/1024 [MB] (31 MBps) [2024-11-18T10:53:52.947Z] Copying: 825/1024 [MB] (26 MBps) [2024-11-18T10:53:53.890Z] Copying: 855/1024 [MB] (30 MBps) [2024-11-18T10:53:54.834Z] Copying: 885/1024 [MB] (29 MBps) [2024-11-18T10:53:55.778Z] Copying: 919/1024 [MB] (34 MBps) 
[2024-11-18T10:53:56.722Z] Copying: 951/1024 [MB] (31 MBps) [2024-11-18T10:53:57.665Z] Copying: 978/1024 [MB] (27 MBps) [2024-11-18T10:53:57.665Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-18 10:53:57.646188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.781 [2024-11-18 10:53:57.646256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:31.781 [2024-11-18 10:53:57.646278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:31.781 [2024-11-18 10:53:57.646287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.781 [2024-11-18 10:53:57.646312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:31.781 [2024-11-18 10:53:57.649422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.781 [2024-11-18 10:53:57.649455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:31.781 [2024-11-18 10:53:57.649466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:25:31.781 [2024-11-18 10:53:57.649475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.781 [2024-11-18 10:53:57.649973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.781 [2024-11-18 10:53:57.650067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:31.781 [2024-11-18 10:53:57.650135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:25:31.781 [2024-11-18 10:53:57.650218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.781 [2024-11-18 10:53:57.661951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.781 [2024-11-18 10:53:57.662000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:31.781 [2024-11-18 10:53:57.662011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.694 ms 00:25:31.781 [2024-11-18 10:53:57.662019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.668339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.668458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:32.043 [2024-11-18 10:53:57.668475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.292 ms 00:25:32.043 [2024-11-18 10:53:57.668489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.693049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.693082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:32.043 [2024-11-18 10:53:57.693093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.510 ms 00:25:32.043 [2024-11-18 10:53:57.693100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.707677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.707803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:32.043 [2024-11-18 10:53:57.707820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.544 ms 00:25:32.043 [2024-11-18 10:53:57.707829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.712059] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.712092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:32.043 [2024-11-18 10:53:57.712103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.140 ms 00:25:32.043 [2024-11-18 10:53:57.712110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.736024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.736056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:32.043 [2024-11-18 10:53:57.736066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.894 ms 00:25:32.043 [2024-11-18 10:53:57.736072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.759336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.759377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:32.043 [2024-11-18 10:53:57.759395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.233 ms 00:25:32.043 [2024-11-18 10:53:57.759402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.782544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.782659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:32.043 [2024-11-18 10:53:57.782674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.110 ms 00:25:32.043 [2024-11-18 10:53:57.782682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.805635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.043 [2024-11-18 10:53:57.805746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:32.043 [2024-11-18 10:53:57.805761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:25:32.043 [2024-11-18 10:53:57.805768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.043 [2024-11-18 10:53:57.805795] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:32.043 [2024-11-18 10:53:57.805808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:32.043 [2024-11-18 10:53:57.805818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:32.043 [2024-11-18 10:53:57.805827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805871] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.805998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:32.043 [2024-11-18 10:53:57.806006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806068] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 
10:53:57.806271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:25:32.044 [2024-11-18 10:53:57.806457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:32.044 [2024-11-18 10:53:57.806599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:32.044 [2024-11-18 10:53:57.806606] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5fffd43e-f9ab-43fc-b706-00156854ebab 00:25:32.044 [2024-11-18 10:53:57.806614] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:32.044 [2024-11-18 10:53:57.806625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161472 00:25:32.044 [2024-11-18 10:53:57.806632] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159488 00:25:32.044 [2024-11-18 10:53:57.806643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:25:32.044 [2024-11-18 10:53:57.806650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:32.044 [2024-11-18 10:53:57.806657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:32.044 [2024-11-18 10:53:57.806664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:32.044 [2024-11-18 10:53:57.806676] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:32.044 [2024-11-18 10:53:57.806683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:32.044 [2024-11-18 10:53:57.806690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.044 [2024-11-18 10:53:57.806697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:32.044 [2024-11-18 10:53:57.806705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:25:32.044 [2024-11-18 10:53:57.806713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.044 [2024-11-18 10:53:57.819322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.044 [2024-11-18 10:53:57.819355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:32.044 [2024-11-18 10:53:57.819364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.594 ms 00:25:32.045 [2024-11-18 10:53:57.819372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.045 [2024-11-18 10:53:57.819726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.045 [2024-11-18 10:53:57.819735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:32.045 [2024-11-18 10:53:57.819744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:25:32.045 [2024-11-18 10:53:57.819751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.045 [2024-11-18 10:53:57.853285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.045 [2024-11-18 10:53:57.853320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.045 [2024-11-18 10:53:57.853330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.045 [2024-11-18 10:53:57.853337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.045 [2024-11-18 10:53:57.853387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.045 [2024-11-18 10:53:57.853394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.045 [2024-11-18 10:53:57.853402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.045 [2024-11-18 10:53:57.853410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.045 [2024-11-18 10:53:57.853481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.045 [2024-11-18 10:53:57.853493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:32.045 [2024-11-18 10:53:57.853501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.045 [2024-11-18 10:53:57.853508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.045 [2024-11-18 10:53:57.853523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.045 [2024-11-18 10:53:57.853531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:32.045 [2024-11-18 10:53:57.853539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.045 [2024-11-18 10:53:57.853545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:57.932946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:57.933119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:25:32.307 [2024-11-18 10:53:57.933137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:57.933144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:57.999942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.307 [2024-11-18 10:53:58.000163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.307 [2024-11-18 10:53:58.000286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.307 [2024-11-18 10:53:58.000373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.307 [2024-11-18 10:53:58.000529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:32.307 [2024-11-18 10:53:58.000590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.307 [2024-11-18 10:53:58.000657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.307 [2024-11-18 10:53:58.000722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.307 [2024-11-18 10:53:58.000731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.307 [2024-11-18 10:53:58.000738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.307 [2024-11-18 10:53:58.000867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
354.641 ms, result 0 00:25:32.880 00:25:32.880 00:25:32.880 10:53:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:35.434 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:35.434 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:35.434 [2024-11-18 10:54:01.052957] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:35.434 [2024-11-18 10:54:01.053342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79581 ] 00:25:35.434 [2024-11-18 10:54:01.219174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.695 [2024-11-18 10:54:01.338618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.957 [2024-11-18 10:54:01.626856] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.957 [2024-11-18 10:54:01.626933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.957 [2024-11-18 10:54:01.789449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.789663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.957 [2024-11-18 10:54:01.789696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.957 [2024-11-18 10:54:01.789705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.789769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.789780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.957 [2024-11-18 10:54:01.789793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:35.957 [2024-11-18 10:54:01.789801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.789822] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.957 [2024-11-18 10:54:01.790545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.957 [2024-11-18 10:54:01.790565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.790573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.957 [2024-11-18 10:54:01.790583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:25:35.957 [2024-11-18 10:54:01.790591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.792266] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.957 [2024-11-18 10:54:01.806341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.806387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.957 [2024-11-18 10:54:01.806401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.077 ms 00:25:35.957 
[2024-11-18 10:54:01.806410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.806487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.806497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.957 [2024-11-18 10:54:01.806506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:35.957 [2024-11-18 10:54:01.806514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.814409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.814448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.957 [2024-11-18 10:54:01.814458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.820 ms 00:25:35.957 [2024-11-18 10:54:01.814467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.814550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.814559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.957 [2024-11-18 10:54:01.814567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:35.957 [2024-11-18 10:54:01.814575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.814615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.814625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.957 [2024-11-18 10:54:01.814634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:35.957 [2024-11-18 10:54:01.814642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.814664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.957 [2024-11-18 10:54:01.818782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.818963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.957 [2024-11-18 10:54:01.818984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.123 ms 00:25:35.957 [2024-11-18 10:54:01.818997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.819033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.819041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.957 [2024-11-18 10:54:01.819056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:35.957 [2024-11-18 10:54:01.819064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.819115] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.957 [2024-11-18 10:54:01.819139] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.957 [2024-11-18 10:54:01.819175] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.957 [2024-11-18 10:54:01.819194] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.957 
[2024-11-18 10:54:01.819315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.957 [2024-11-18 10:54:01.819327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.957 [2024-11-18 10:54:01.819339] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.957 [2024-11-18 10:54:01.819350] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.957 [2024-11-18 10:54:01.819358] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.957 [2024-11-18 10:54:01.819367] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.957 [2024-11-18 10:54:01.819375] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.957 [2024-11-18 10:54:01.819383] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.957 [2024-11-18 10:54:01.819390] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.957 [2024-11-18 10:54:01.819402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.819409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.957 [2024-11-18 10:54:01.819417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:35.957 [2024-11-18 10:54:01.819425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.819507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.957 [2024-11-18 10:54:01.819515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.957 [2024-11-18 10:54:01.819523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:35.957 [2024-11-18 10:54:01.819532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.957 [2024-11-18 10:54:01.819634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.958 [2024-11-18 10:54:01.819650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.958 [2024-11-18 10:54:01.819659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.958 [2024-11-18 10:54:01.819682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.958 [2024-11-18 10:54:01.819705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.958 [2024-11-18 10:54:01.819719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.958 [2024-11-18 10:54:01.819726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.958 [2024-11-18 10:54:01.819737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.50 MiB 00:25:35.958 [2024-11-18 10:54:01.819744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.958 [2024-11-18 10:54:01.819751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.958 [2024-11-18 10:54:01.819764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.958 [2024-11-18 10:54:01.819778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.958 [2024-11-18 10:54:01.819800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.958 [2024-11-18 10:54:01.819822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.958 [2024-11-18 10:54:01.819841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.958 [2024-11-18 10:54:01.819861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.958 [2024-11-18 10:54:01.819882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.958 [2024-11-18 10:54:01.819895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.958 [2024-11-18 10:54:01.819902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.958 [2024-11-18 10:54:01.819908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.958 [2024-11-18 10:54:01.819915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.958 [2024-11-18 10:54:01.819922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.958 [2024-11-18 10:54:01.819928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.958 [2024-11-18 10:54:01.819940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.958 [2024-11-18 10:54:01.819946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819954] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.958 [2024-11-18 10:54:01.819963] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.958 [2024-11-18 10:54:01.819971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.958 [2024-11-18 10:54:01.819979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.958 [2024-11-18 10:54:01.819986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.958 [2024-11-18 10:54:01.819994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.958 [2024-11-18 10:54:01.820000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.958 [2024-11-18 10:54:01.820008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.958 [2024-11-18 10:54:01.820015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.958 [2024-11-18 10:54:01.820022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.958 [2024-11-18 10:54:01.820030] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.958 [2024-11-18 10:54:01.820040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.958 [2024-11-18 10:54:01.820048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.958 [2024-11-18 10:54:01.820056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.958 [2024-11-18 10:54:01.820063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.958 [2024-11-18 10:54:01.820070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.958 [2024-11-18 10:54:01.820077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.958 [2024-11-18 10:54:01.820084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.958 [2024-11-18 10:54:01.820091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:35.958 [2024-11-18 10:54:01.820098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.958 [2024-11-18 10:54:01.820105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.958 [2024-11-18 10:54:01.820112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.958 [2024-11-18 10:54:01.820119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.958 [2024-11-18 10:54:01.820126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.958 [2024-11-18 10:54:01.820133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 
00:25:35.958 [2024-11-18 10:54:01.820140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.958 [2024-11-18 10:54:01.820147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.958 [2024-11-18 10:54:01.820159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.958 [2024-11-18 10:54:01.820168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.958 [2024-11-18 10:54:01.820175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.958 [2024-11-18 10:54:01.820182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.958 [2024-11-18 10:54:01.820188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.958 [2024-11-18 10:54:01.820196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.958 [2024-11-18 10:54:01.820220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.958 [2024-11-18 10:54:01.820228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:25:35.958 [2024-11-18 10:54:01.820236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.297 [2024-11-18 10:54:01.852314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.297 [2024-11-18 10:54:01.852502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.297 [2024-11-18 10:54:01.852566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.030 ms 00:25:36.297 [2024-11-18 10:54:01.852590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.297 [2024-11-18 10:54:01.852704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.297 [2024-11-18 10:54:01.852726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.297 [2024-11-18 10:54:01.852746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:36.297 [2024-11-18 10:54:01.852765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.297 [2024-11-18 10:54:01.899110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.297 [2024-11-18 10:54:01.899313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.297 [2024-11-18 10:54:01.899384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.272 ms 00:25:36.297 [2024-11-18 10:54:01.899410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.297 [2024-11-18 10:54:01.899475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.297 [2024-11-18 10:54:01.899500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.297 [2024-11-18 10:54:01.899530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.297 [2024-11-18 10:54:01.899554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.297 [2024-11-18 10:54:01.900111] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:36.297 [2024-11-18 10:54:01.900256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.298 [2024-11-18 10:54:01.900315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:25:36.298 [2024-11-18 10:54:01.900338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.900900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:01.901017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.298 [2024-11-18 10:54:01.901072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:25:36.298 [2024-11-18 10:54:01.901105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.917022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:01.917183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.298 [2024-11-18 10:54:01.917274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:25:36.298 [2024-11-18 10:54:01.917298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.931622] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:36.298 [2024-11-18 10:54:01.931798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.298 [2024-11-18 10:54:01.931865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:01.931886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.298 [2024-11-18 10:54:01.931907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.440 ms 00:25:36.298 [2024-11-18 10:54:01.931926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.957722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:01.957892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.298 [2024-11-18 10:54:01.957952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:25:36.298 [2024-11-18 10:54:01.957976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.970651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:01.970803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.298 [2024-11-18 10:54:01.970858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.623 ms 00:25:36.298 [2024-11-18 10:54:01.970879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.983166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:01.983335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.298 [2024-11-18 10:54:01.983396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.241 ms 00:25:36.298 [2024-11-18 10:54:01.983418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:01.984152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 
10:54:01.984319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.298 [2024-11-18 10:54:01.984385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:25:36.298 [2024-11-18 10:54:01.984426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.048772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.048946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.298 [2024-11-18 10:54:02.049019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.306 ms 00:25:36.298 [2024-11-18 10:54:02.049044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.060430] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.298 [2024-11-18 10:54:02.063614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.063748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.298 [2024-11-18 10:54:02.063803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.512 ms 00:25:36.298 [2024-11-18 10:54:02.063827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.063931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.063960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.298 [2024-11-18 10:54:02.063982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:36.298 [2024-11-18 10:54:02.064005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.064841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.064990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.298 [2024-11-18 10:54:02.065049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:25:36.298 [2024-11-18 10:54:02.065074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.065127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.065155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.298 [2024-11-18 10:54:02.065180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:36.298 [2024-11-18 10:54:02.065201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.065270] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.298 [2024-11-18 10:54:02.065298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.065319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.298 [2024-11-18 10:54:02.065390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:36.298 [2024-11-18 10:54:02.065402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.091112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.091159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.298 [2024-11-18 
10:54:02.091171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:25:36.298 [2024-11-18 10:54:02.091185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.091293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.298 [2024-11-18 10:54:02.091305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.298 [2024-11-18 10:54:02.091315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:36.298 [2024-11-18 10:54:02.091323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.298 [2024-11-18 10:54:02.092563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.585 ms, result 0 00:25:37.697  [2024-11-18T10:54:04.525Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-18T10:54:05.467Z] Copying: 29/1024 [MB] (12 MBps) [2024-11-18T10:54:06.410Z] Copying: 49/1024 [MB] (19 MBps) [2024-11-18T10:54:07.353Z] Copying: 66/1024 [MB] (17 MBps) [2024-11-18T10:54:08.295Z] Copying: 84/1024 [MB] (18 MBps) [2024-11-18T10:54:09.682Z] Copying: 101/1024 [MB] (16 MBps) [2024-11-18T10:54:10.625Z] Copying: 118/1024 [MB] (17 MBps) [2024-11-18T10:54:11.570Z] Copying: 137/1024 [MB] (18 MBps) [2024-11-18T10:54:12.514Z] Copying: 155/1024 [MB] (17 MBps) [2024-11-18T10:54:13.459Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-18T10:54:14.401Z] Copying: 176/1024 [MB] (10 MBps) [2024-11-18T10:54:15.346Z] Copying: 190/1024 [MB] (13 MBps) [2024-11-18T10:54:16.291Z] Copying: 209/1024 [MB] (18 MBps) [2024-11-18T10:54:17.679Z] Copying: 220/1024 [MB] (10 MBps) [2024-11-18T10:54:18.623Z] Copying: 239/1024 [MB] (19 MBps) [2024-11-18T10:54:19.569Z] Copying: 253/1024 [MB] (14 MBps) [2024-11-18T10:54:20.511Z] Copying: 264/1024 [MB] (10 MBps) [2024-11-18T10:54:21.454Z] Copying: 281/1024 [MB] (17 MBps) [2024-11-18T10:54:22.399Z] Copying: 299/1024 [MB] (17 MBps) [2024-11-18T10:54:23.343Z] Copying: 325/1024 [MB] (26 MBps) [2024-11-18T10:54:24.288Z] Copying: 345/1024 [MB] (19 MBps) [2024-11-18T10:54:25.673Z] Copying: 356/1024 [MB] (10 MBps) [2024-11-18T10:54:26.616Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-18T10:54:27.560Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-18T10:54:28.504Z] Copying: 392/1024 [MB] (15 MBps) [2024-11-18T10:54:29.446Z] Copying: 405/1024 [MB] (12 MBps) [2024-11-18T10:54:30.389Z] Copying: 415/1024 [MB] (10 MBps) [2024-11-18T10:54:31.378Z] Copying: 426/1024 [MB] (10 MBps) [2024-11-18T10:54:32.344Z] Copying: 448/1024 [MB] (21 MBps) [2024-11-18T10:54:33.288Z] Copying: 464/1024 [MB] (16 MBps) [2024-11-18T10:54:34.674Z] Copying: 485/1024 [MB] (21 MBps) [2024-11-18T10:54:35.619Z] Copying: 503/1024 [MB] (18 MBps) [2024-11-18T10:54:36.564Z] Copying: 523/1024 [MB] (20 MBps) [2024-11-18T10:54:37.507Z] Copying: 542/1024 [MB] (18 MBps) [2024-11-18T10:54:38.452Z] Copying: 556/1024 [MB] (13 MBps) [2024-11-18T10:54:39.406Z] Copying: 566/1024 [MB] (10 MBps) [2024-11-18T10:54:40.352Z] Copying: 577/1024 [MB] (10 MBps) [2024-11-18T10:54:41.292Z] Copying: 588/1024 [MB] (10 MBps) [2024-11-18T10:54:42.677Z] Copying: 598/1024 [MB] (10 MBps) [2024-11-18T10:54:43.641Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-18T10:54:44.585Z] Copying: 619/1024 [MB] (10 MBps) [2024-11-18T10:54:45.529Z] Copying: 630/1024 [MB] (10 MBps) [2024-11-18T10:54:46.474Z] Copying: 641/1024 [MB] (10 MBps) [2024-11-18T10:54:47.419Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-18T10:54:48.364Z] Copying: 662/1024 [MB] (10 MBps) 
[2024-11-18T10:54:49.309Z] Copying: 673/1024 [MB] (11 MBps) [2024-11-18T10:54:50.696Z] Copying: 684/1024 [MB] (10 MBps) [2024-11-18T10:54:51.641Z] Copying: 694/1024 [MB] (10 MBps) [2024-11-18T10:54:52.586Z] Copying: 721696/1048576 [kB] (10096 kBps) [2024-11-18T10:54:53.531Z] Copying: 726/1024 [MB] (21 MBps) [2024-11-18T10:54:54.475Z] Copying: 754/1024 [MB] (28 MBps) [2024-11-18T10:54:55.419Z] Copying: 774/1024 [MB] (20 MBps) [2024-11-18T10:54:56.364Z] Copying: 805/1024 [MB] (30 MBps) [2024-11-18T10:54:57.304Z] Copying: 826/1024 [MB] (21 MBps) [2024-11-18T10:54:58.691Z] Copying: 848/1024 [MB] (21 MBps) [2024-11-18T10:54:59.636Z] Copying: 864/1024 [MB] (16 MBps) [2024-11-18T10:55:00.646Z] Copying: 879/1024 [MB] (14 MBps) [2024-11-18T10:55:01.591Z] Copying: 898/1024 [MB] (19 MBps) [2024-11-18T10:55:02.537Z] Copying: 919/1024 [MB] (20 MBps) [2024-11-18T10:55:03.482Z] Copying: 932/1024 [MB] (13 MBps) [2024-11-18T10:55:04.425Z] Copying: 951/1024 [MB] (18 MBps) [2024-11-18T10:55:05.369Z] Copying: 972/1024 [MB] (20 MBps) [2024-11-18T10:55:06.313Z] Copying: 998/1024 [MB] (26 MBps) [2024-11-18T10:55:06.889Z] Copying: 1018/1024 [MB] (19 MBps) [2024-11-18T10:55:06.889Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 10:55:06.778994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.779071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:41.005 [2024-11-18 10:55:06.779088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:41.005 [2024-11-18 10:55:06.779097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.779121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:41.005 [2024-11-18 10:55:06.782236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.782275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:41.005 [2024-11-18 10:55:06.782295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:26:41.005 [2024-11-18 10:55:06.782304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.782546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.782557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:41.005 [2024-11-18 10:55:06.782567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:26:41.005 [2024-11-18 10:55:06.782575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.786550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.786670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:41.005 [2024-11-18 10:55:06.786731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:26:41.005 [2024-11-18 10:55:06.786756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.793031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.793192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:41.005 [2024-11-18 10:55:06.793327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:26:41.005 [2024-11-18 10:55:06.793354] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.821036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.821232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:41.005 [2024-11-18 10:55:06.821358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.604 ms 00:26:41.005 [2024-11-18 10:55:06.821385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.837703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.837881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:41.005 [2024-11-18 10:55:06.837954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.265 ms 00:26:41.005 [2024-11-18 10:55:06.837979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.842483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.842645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:41.005 [2024-11-18 10:55:06.842701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.448 ms 00:26:41.005 [2024-11-18 10:55:06.842723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.005 [2024-11-18 10:55:06.869340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.005 [2024-11-18 10:55:06.869499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:41.005 [2024-11-18 10:55:06.869557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.584 ms 00:26:41.005 [2024-11-18 10:55:06.869581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.269 [2024-11-18 10:55:06.895545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.269 [2024-11-18 10:55:06.895722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:41.269 [2024-11-18 10:55:06.895779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.914 ms 00:26:41.269 [2024-11-18 10:55:06.895800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.269 [2024-11-18 10:55:06.921124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.269 [2024-11-18 10:55:06.921333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:41.269 [2024-11-18 10:55:06.921398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.237 ms 00:26:41.269 [2024-11-18 10:55:06.921421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.269 [2024-11-18 10:55:06.947130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.269 [2024-11-18 10:55:06.947323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:41.269 [2024-11-18 10:55:06.947386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.615 ms 00:26:41.269 [2024-11-18 10:55:06.947407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.269 [2024-11-18 10:55:06.947454] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:41.269 [2024-11-18 10:55:06.947487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:41.269 [2024-11-18 10:55:06.947519] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:41.269 [2024-11-18 10:55:06.947528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 
10:55:06.947713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:41.269 [2024-11-18 10:55:06.947860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:26:41.270 [2024-11-18 10:55:06.947898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.947994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:41.270 [2024-11-18 10:55:06.948283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:26:41.270 [2024-11-18 10:55:06.948295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5fffd43e-f9ab-43fc-b706-00156854ebab 00:26:41.270 [2024-11-18 10:55:06.948303] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:41.270 [2024-11-18 10:55:06.948310] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:41.270 [2024-11-18 10:55:06.948318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:41.270 [2024-11-18 10:55:06.948326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:41.270 [2024-11-18 10:55:06.948333] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:41.270 [2024-11-18 10:55:06.948341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:41.270 [2024-11-18 10:55:06.948357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:41.270 [2024-11-18 10:55:06.948363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:41.270 [2024-11-18 10:55:06.948369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:41.270 [2024-11-18 10:55:06.948377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.270 [2024-11-18 10:55:06.948384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:41.270 [2024-11-18 10:55:06.948393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:26:41.270 [2024-11-18 10:55:06.948405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:06.962432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.270 [2024-11-18 10:55:06.962596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:41.270 [2024-11-18 10:55:06.962614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.973 ms 00:26:41.270 [2024-11-18 10:55:06.962623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:06.963012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.270 [2024-11-18 10:55:06.963021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:41.270 [2024-11-18 10:55:06.963048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:26:41.270 [2024-11-18 10:55:06.963056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:06.999911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.270 [2024-11-18 10:55:06.999953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.270 [2024-11-18 10:55:06.999965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.270 [2024-11-18 10:55:06.999975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:07.000045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.270 [2024-11-18 10:55:07.000056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.270 [2024-11-18 10:55:07.000070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.270 [2024-11-18 10:55:07.000080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:07.000179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:41.270 [2024-11-18 10:55:07.000191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.270 [2024-11-18 10:55:07.000201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.270 [2024-11-18 10:55:07.000242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:07.000259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.270 [2024-11-18 10:55:07.000269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.270 [2024-11-18 10:55:07.000278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.270 [2024-11-18 10:55:07.000290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.270 [2024-11-18 10:55:07.084710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.270 [2024-11-18 10:55:07.084759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.270 [2024-11-18 10:55:07.084772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.270 [2024-11-18 10:55:07.084781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.153935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.153979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.532 [2024-11-18 10:55:07.153991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.154076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.154086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.532 [2024-11-18 10:55:07.154094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.154164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.154174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.532 [2024-11-18 10:55:07.154186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.154335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.154346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.532 [2024-11-18 10:55:07.154355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.154395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.154406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:41.532 [2024-11-18 10:55:07.154415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 
10:55:07.154472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.154482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.532 [2024-11-18 10:55:07.154491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.154550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.532 [2024-11-18 10:55:07.154561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.532 [2024-11-18 10:55:07.154570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.532 [2024-11-18 10:55:07.154578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-11-18 10:55:07.154717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.685 ms, result 0 00:26:42.476 00:26:42.476 00:26:42.476 10:55:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:45.026 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77556 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77556 ']' 00:26:45.026 Process with pid 77556 is not found 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77556 00:26:45.026 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77556) - No such process 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77556 is not found' 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:45.026 Remove shared memory files 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:45.026 10:55:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:45.288 ************************************ 00:26:45.288 END TEST 
ftl_dirty_shutdown 00:26:45.288 ************************************ 00:26:45.288 00:26:45.288 real 4m21.064s 00:26:45.288 user 4m53.281s 00:26:45.288 sys 0m28.126s 00:26:45.288 10:55:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.288 10:55:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:45.288 10:55:10 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:45.288 10:55:10 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:45.288 10:55:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.288 10:55:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:45.288 ************************************ 00:26:45.288 START TEST ftl_upgrade_shutdown 00:26:45.288 ************************************ 00:26:45.288 10:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:45.288 * Looking for test storage... 00:26:45.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:45.288 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.289 --rc genhtml_branch_coverage=1 00:26:45.289 --rc genhtml_function_coverage=1 00:26:45.289 --rc genhtml_legend=1 00:26:45.289 --rc geninfo_all_blocks=1 00:26:45.289 --rc geninfo_unexecuted_blocks=1 00:26:45.289 00:26:45.289 ' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.289 --rc genhtml_branch_coverage=1 00:26:45.289 --rc genhtml_function_coverage=1 00:26:45.289 --rc genhtml_legend=1 00:26:45.289 --rc geninfo_all_blocks=1 00:26:45.289 --rc geninfo_unexecuted_blocks=1 00:26:45.289 00:26:45.289 ' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.289 --rc genhtml_branch_coverage=1 00:26:45.289 --rc genhtml_function_coverage=1 00:26:45.289 --rc genhtml_legend=1 00:26:45.289 --rc geninfo_all_blocks=1 00:26:45.289 --rc geninfo_unexecuted_blocks=1 00:26:45.289 00:26:45.289 ' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.289 --rc genhtml_branch_coverage=1 00:26:45.289 --rc genhtml_function_coverage=1 00:26:45.289 --rc genhtml_legend=1 00:26:45.289 --rc geninfo_all_blocks=1 00:26:45.289 --rc geninfo_unexecuted_blocks=1 00:26:45.289 00:26:45.289 ' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:45.289 10:55:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80361 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80361 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80361 ']' 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.289 10:55:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:45.550 [2024-11-18 10:55:11.243944] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:45.550 [2024-11-18 10:55:11.244306] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80361 ] 00:26:45.550 [2024-11-18 10:55:11.409879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.811 [2024-11-18 10:55:11.536189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.383 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.383 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:46.383 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:46.383 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:46.383 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:46.384 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:26:46.645 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:46.907 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:46.907 { 00:26:46.907 "name": "basen1", 00:26:46.907 "aliases": [ 00:26:46.907 "9787a42d-ac95-4efd-a6d3-cde5e5ed2e42" 00:26:46.907 ], 00:26:46.907 "product_name": "NVMe disk", 00:26:46.907 "block_size": 4096, 00:26:46.907 "num_blocks": 1310720, 00:26:46.907 "uuid": "9787a42d-ac95-4efd-a6d3-cde5e5ed2e42", 00:26:46.907 "numa_id": -1, 00:26:46.907 "assigned_rate_limits": { 00:26:46.907 "rw_ios_per_sec": 0, 00:26:46.907 "rw_mbytes_per_sec": 0, 00:26:46.907 "r_mbytes_per_sec": 0, 00:26:46.907 "w_mbytes_per_sec": 0 00:26:46.907 }, 00:26:46.907 "claimed": true, 00:26:46.907 "claim_type": "read_many_write_one", 00:26:46.907 "zoned": false, 00:26:46.907 "supported_io_types": { 00:26:46.907 "read": true, 00:26:46.907 "write": true, 00:26:46.907 "unmap": true, 00:26:46.907 "flush": true, 00:26:46.907 "reset": true, 00:26:46.907 "nvme_admin": true, 00:26:46.907 "nvme_io": true, 00:26:46.907 "nvme_io_md": false, 00:26:46.907 "write_zeroes": true, 00:26:46.907 "zcopy": false, 00:26:46.907 "get_zone_info": false, 00:26:46.907 "zone_management": false, 00:26:46.907 "zone_append": false, 00:26:46.907 "compare": true, 00:26:46.907 "compare_and_write": false, 00:26:46.907 "abort": true, 00:26:46.907 "seek_hole": false, 00:26:46.907 "seek_data": false, 00:26:46.907 "copy": true, 00:26:46.907 "nvme_iov_md": false 00:26:46.907 }, 00:26:46.907 "driver_specific": { 00:26:46.907 "nvme": [ 00:26:46.907 { 00:26:46.907 "pci_address": "0000:00:11.0", 00:26:46.907 "trid": { 00:26:46.907 "trtype": "PCIe", 00:26:46.907 "traddr": "0000:00:11.0" 00:26:46.907 }, 00:26:46.907 "ctrlr_data": { 00:26:46.907 "cntlid": 0, 00:26:46.907 "vendor_id": "0x1b36", 00:26:46.907 "model_number": "QEMU NVMe Ctrl", 00:26:46.907 "serial_number": "12341", 00:26:46.907 "firmware_revision": "8.0.0", 00:26:46.907 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:46.907 "oacs": { 00:26:46.907 "security": 0, 00:26:46.907 "format": 1, 00:26:46.907 "firmware": 0, 00:26:46.907 "ns_manage": 1 00:26:46.907 }, 00:26:46.907 "multi_ctrlr": false, 00:26:46.907 "ana_reporting": false 00:26:46.907 }, 00:26:46.907 "vs": { 00:26:46.907 "nvme_version": "1.4" 00:26:46.907 }, 00:26:46.907 "ns_data": { 00:26:46.907 "id": 1, 00:26:46.907 "can_share": false 00:26:46.907 } 00:26:46.907 } 00:26:46.907 ], 00:26:46.907 "mp_policy": "active_passive" 00:26:46.907 } 00:26:46.907 } 00:26:46.907 ]' 00:26:46.907 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:46.907 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:46.907 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:47.168 10:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:47.168 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85 00:26:47.168 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:47.168 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8b46b28-3cdc-43b0-99c3-dcc9e35b5a85 00:26:47.430 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:47.691 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c16cdd4f-316c-466c-a546-3b0a7df6faf0 00:26:47.691 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c16cdd4f-316c-466c-a546-3b0a7df6faf0 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2789cbbd-d7e2-463e-94d2-45e8b104979a 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2789cbbd-d7e2-463e-94d2-45e8b104979a ]] 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2789cbbd-d7e2-463e-94d2-45e8b104979a 5120 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2789cbbd-d7e2-463e-94d2-45e8b104979a 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2789cbbd-d7e2-463e-94d2-45e8b104979a 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2789cbbd-d7e2-463e-94d2-45e8b104979a 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:47.952 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2789cbbd-d7e2-463e-94d2-45e8b104979a 00:26:48.213 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:48.213 { 00:26:48.213 "name": "2789cbbd-d7e2-463e-94d2-45e8b104979a", 00:26:48.213 "aliases": [ 00:26:48.213 "lvs/basen1p0" 00:26:48.213 ], 00:26:48.213 "product_name": "Logical Volume", 00:26:48.213 "block_size": 4096, 00:26:48.213 "num_blocks": 5242880, 00:26:48.213 "uuid": "2789cbbd-d7e2-463e-94d2-45e8b104979a", 00:26:48.213 "assigned_rate_limits": { 00:26:48.213 "rw_ios_per_sec": 0, 00:26:48.213 "rw_mbytes_per_sec": 0, 00:26:48.213 "r_mbytes_per_sec": 0, 00:26:48.213 "w_mbytes_per_sec": 0 00:26:48.213 }, 00:26:48.213 "claimed": false, 00:26:48.213 "zoned": false, 00:26:48.213 "supported_io_types": { 00:26:48.213 "read": true, 00:26:48.213 "write": true, 00:26:48.213 "unmap": true, 00:26:48.213 "flush": false, 00:26:48.213 "reset": true, 00:26:48.213 "nvme_admin": false, 00:26:48.213 "nvme_io": false, 00:26:48.213 "nvme_io_md": false, 00:26:48.213 "write_zeroes": 
true, 00:26:48.213 "zcopy": false, 00:26:48.213 "get_zone_info": false, 00:26:48.213 "zone_management": false, 00:26:48.213 "zone_append": false, 00:26:48.213 "compare": false, 00:26:48.213 "compare_and_write": false, 00:26:48.213 "abort": false, 00:26:48.213 "seek_hole": true, 00:26:48.213 "seek_data": true, 00:26:48.213 "copy": false, 00:26:48.213 "nvme_iov_md": false 00:26:48.213 }, 00:26:48.213 "driver_specific": { 00:26:48.213 "lvol": { 00:26:48.214 "lvol_store_uuid": "c16cdd4f-316c-466c-a546-3b0a7df6faf0", 00:26:48.214 "base_bdev": "basen1", 00:26:48.214 "thin_provision": true, 00:26:48.214 "num_allocated_clusters": 0, 00:26:48.214 "snapshot": false, 00:26:48.214 "clone": false, 00:26:48.214 "esnap_clone": false 00:26:48.214 } 00:26:48.214 } 00:26:48.214 } 00:26:48.214 ]' 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:48.214 10:55:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:48.474 10:55:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:48.474 10:55:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:48.474 10:55:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:48.735 10:55:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:48.735 10:55:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:48.735 10:55:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2789cbbd-d7e2-463e-94d2-45e8b104979a -c cachen1p0 --l2p_dram_limit 2 00:26:48.997 [2024-11-18 10:55:14.632366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.632519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:48.997 [2024-11-18 10:55:14.632539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:48.997 [2024-11-18 10:55:14.632546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.632599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.632607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:48.997 [2024-11-18 10:55:14.632615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:48.997 [2024-11-18 10:55:14.632621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.632639] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:48.997 [2024-11-18 
10:55:14.633234] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:48.997 [2024-11-18 10:55:14.633250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.633256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:48.997 [2024-11-18 10:55:14.633264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:26:48.997 [2024-11-18 10:55:14.633270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.633323] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 6089a30d-d95d-4895-84aa-791f092cefb7 00:26:48.997 [2024-11-18 10:55:14.634293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.634316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:48.997 [2024-11-18 10:55:14.634324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:48.997 [2024-11-18 10:55:14.634332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.639099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.639128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:48.997 [2024-11-18 10:55:14.639137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.729 ms 00:26:48.997 [2024-11-18 10:55:14.639144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.639175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.639183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:48.997 [2024-11-18 10:55:14.639189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:48.997 [2024-11-18 10:55:14.639198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.639243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.639252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:48.997 [2024-11-18 10:55:14.639258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:48.997 [2024-11-18 10:55:14.639269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.639285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:48.997 [2024-11-18 10:55:14.642263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.642358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:48.997 [2024-11-18 10:55:14.642411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.978 ms 00:26:48.997 [2024-11-18 10:55:14.642430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.642462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.642565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:48.997 [2024-11-18 10:55:14.642611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:48.997 [2024-11-18 10:55:14.642626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.642649] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:48.997 [2024-11-18 10:55:14.642763] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:48.997 [2024-11-18 10:55:14.642855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:48.997 [2024-11-18 10:55:14.642880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:48.997 [2024-11-18 10:55:14.642906] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:48.997 [2024-11-18 10:55:14.642930] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:48.997 [2024-11-18 10:55:14.642954] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:48.997 [2024-11-18 10:55:14.642970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:48.997 [2024-11-18 10:55:14.643111] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:48.997 [2024-11-18 10:55:14.643130] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:48.997 [2024-11-18 10:55:14.643147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.643163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:48.997 [2024-11-18 10:55:14.643180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:26:48.997 [2024-11-18 10:55:14.643194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.643326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.997 [2024-11-18 10:55:14.643347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:48.997 [2024-11-18 10:55:14.643366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:48.997 [2024-11-18 10:55:14.643386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.997 [2024-11-18 10:55:14.643484] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:48.997 [2024-11-18 10:55:14.643538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:48.997 [2024-11-18 10:55:14.643558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:48.997 [2024-11-18 10:55:14.643575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.997 [2024-11-18 10:55:14.643592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:48.997 [2024-11-18 10:55:14.643606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:48.997 [2024-11-18 10:55:14.643640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:48.997 [2024-11-18 10:55:14.643659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:48.997 [2024-11-18 10:55:14.643675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:48.997 [2024-11-18 10:55:14.643707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.997 [2024-11-18 10:55:14.643727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:48.997 [2024-11-18 10:55:14.643741] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:48.997 [2024-11-18 10:55:14.643783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.997 [2024-11-18 10:55:14.643801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:48.997 [2024-11-18 10:55:14.643816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:48.997 [2024-11-18 10:55:14.643831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.997 [2024-11-18 10:55:14.643848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:48.997 [2024-11-18 10:55:14.643863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:48.997 [2024-11-18 10:55:14.643879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.997 [2024-11-18 10:55:14.643893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:48.997 [2024-11-18 10:55:14.643908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:48.997 [2024-11-18 10:55:14.643949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:48.997 [2024-11-18 10:55:14.643967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:48.998 [2024-11-18 10:55:14.643982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:48.998 [2024-11-18 10:55:14.643998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:48.998 [2024-11-18 10:55:14.644012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:48.998 [2024-11-18 10:55:14.644028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:48.998 [2024-11-18 10:55:14.644044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:48.998 [2024-11-18 10:55:14.644060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:48.998 [2024-11-18 10:55:14.644074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:48.998 [2024-11-18 10:55:14.644175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:48.998 [2024-11-18 10:55:14.644182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:48.998 [2024-11-18 10:55:14.644189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:48.998 [2024-11-18 10:55:14.644195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.998 [2024-11-18 10:55:14.644201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:48.998 [2024-11-18 10:55:14.644216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:48.998 [2024-11-18 10:55:14.644223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.998 [2024-11-18 10:55:14.644228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:48.998 [2024-11-18 10:55:14.644235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:48.998 [2024-11-18 10:55:14.644240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.998 [2024-11-18 10:55:14.644247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:48.998 [2024-11-18 10:55:14.644252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:48.998 [2024-11-18 10:55:14.644259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.998 [2024-11-18 10:55:14.644264] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:48.998 [2024-11-18 10:55:14.644272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:48.998 [2024-11-18 10:55:14.644277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:48.998 [2024-11-18 10:55:14.644285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.998 [2024-11-18 10:55:14.644291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:48.998 [2024-11-18 10:55:14.644300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:48.998 [2024-11-18 10:55:14.644305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:48.998 [2024-11-18 10:55:14.644313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:48.998 [2024-11-18 10:55:14.644318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:48.998 [2024-11-18 10:55:14.644324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:48.998 [2024-11-18 10:55:14.644332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:48.998 [2024-11-18 10:55:14.644341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:48.998 [2024-11-18 10:55:14.644356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:48.998 [2024-11-18 10:55:14.644375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:48.998 [2024-11-18 10:55:14.644382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:48.998 [2024-11-18 10:55:14.644387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:48.998 [2024-11-18 10:55:14.644394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:48.998 [2024-11-18 10:55:14.644448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:48.998 [2024-11-18 10:55:14.644456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:48.998 [2024-11-18 10:55:14.644469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:48.998 [2024-11-18 10:55:14.644475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:48.998 [2024-11-18 10:55:14.644481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:48.998 [2024-11-18 10:55:14.644490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.998 [2024-11-18 10:55:14.644498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:48.998 [2024-11-18 10:55:14.644504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:26:48.998 [2024-11-18 10:55:14.644511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.998 [2024-11-18 10:55:14.644549] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
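The layout dump above is all hex block offsets and sizes; pairing data_btm's blk_sz:0x480000 with its printed 18432.00 MiB pins the FTL block at 4 KiB, so MiB = blocks / 256. A minimal decoder sketch for the Region lines, assuming the console output has been captured one record per line into a file named ftl.log (the name is illustrative) and that gawk is available for strtonum:

    grep -o 'Region type:0x[0-9a-f]* ver:[0-9]* blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' ftl.log |
        gawk '{
            split($4, offs, ":"); split($5, size, ":")
            # 4 KiB blocks: block count / 256 = MiB
            printf "%-16s offs %10.2f MiB  size %10.2f MiB\n",
                   $2, strtonum(offs[2]) / 256, strtonum(size[2]) / 256
        }'

Run against the base-dev entries this reproduces the numbers printed earlier, e.g. type:0x9 (blk_offs:0x40, blk_sz:0x480000) decodes to offset 0.25 MiB and size 18432.00 MiB, matching the data_btm region.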
00:26:48.998 [2024-11-18 10:55:14.644560] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:53.208 [2024-11-18 10:55:18.238183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.208 [2024-11-18 10:55:18.238271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:53.208 [2024-11-18 10:55:18.238290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3593.618 ms 00:26:53.208 [2024-11-18 10:55:18.238302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.208 [2024-11-18 10:55:18.269716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.208 [2024-11-18 10:55:18.269936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:53.208 [2024-11-18 10:55:18.269959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.170 ms 00:26:53.208 [2024-11-18 10:55:18.269972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.208 [2024-11-18 10:55:18.270057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.208 [2024-11-18 10:55:18.270071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:53.208 [2024-11-18 10:55:18.270082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:53.208 [2024-11-18 10:55:18.270097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.208 [2024-11-18 10:55:18.305760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.208 [2024-11-18 10:55:18.305810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:53.208 [2024-11-18 10:55:18.305823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.622 ms 00:26:53.208 [2024-11-18 10:55:18.305836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.208 [2024-11-18 10:55:18.305869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.208 [2024-11-18 10:55:18.305884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:53.208 [2024-11-18 10:55:18.305893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:53.208 [2024-11-18 10:55:18.305904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.208 [2024-11-18 10:55:18.306539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.208 [2024-11-18 10:55:18.306568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:53.208 [2024-11-18 10:55:18.306578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:26:53.209 [2024-11-18 10:55:18.306589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.306642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.306653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:53.209 [2024-11-18 10:55:18.306665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:53.209 [2024-11-18 10:55:18.306678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.323756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.323801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:53.209 [2024-11-18 10:55:18.323812] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.059 ms 00:26:53.209 [2024-11-18 10:55:18.323823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.337049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:53.209 [2024-11-18 10:55:18.338307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.338342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:53.209 [2024-11-18 10:55:18.338356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.400 ms 00:26:53.209 [2024-11-18 10:55:18.338365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.376149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.376221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:53.209 [2024-11-18 10:55:18.376241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.748 ms 00:26:53.209 [2024-11-18 10:55:18.376250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.376333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.376348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:53.209 [2024-11-18 10:55:18.376362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:53.209 [2024-11-18 10:55:18.376372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.401143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.401188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:53.209 [2024-11-18 10:55:18.401220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.676 ms 00:26:53.209 [2024-11-18 10:55:18.401229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.425674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.425716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:53.209 [2024-11-18 10:55:18.425731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.408 ms 00:26:53.209 [2024-11-18 10:55:18.425739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.426350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.426369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:53.209 [2024-11-18 10:55:18.426382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.586 ms 00:26:53.209 [2024-11-18 10:55:18.426391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.508887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.509076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:53.209 [2024-11-18 10:55:18.509106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.446 ms 00:26:53.209 [2024-11-18 10:55:18.509115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.536131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
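Every management step lands in the log as a name/duration/status triple from mngt/ftl_mngt.c:427-431, which makes it easy to tabulate where startup time goes. A sketch under the same one-record-per-line ftl.log assumption as above:

    awk -F'name: |duration: ' \
        '/428:trace_step/ { step = $2 }
         /430:trace_step/ { printf "%-40s %s\n", step, $2 }' ftl.log

For this run the table is dominated by Scrub NV cache at 3593.618 ms, with almost every other step well under 100 ms, consistent with the 3956.816 ms 'FTL startup' total reported just below.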
00:26:53.209 [2024-11-18 10:55:18.536316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:53.209 [2024-11-18 10:55:18.536349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.925 ms 00:26:53.209 [2024-11-18 10:55:18.536358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.562288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.562350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:53.209 [2024-11-18 10:55:18.562369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.642 ms 00:26:53.209 [2024-11-18 10:55:18.562377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.588123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.588313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:53.209 [2024-11-18 10:55:18.588339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.693 ms 00:26:53.209 [2024-11-18 10:55:18.588348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.588398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.588421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:53.209 [2024-11-18 10:55:18.588436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:53.209 [2024-11-18 10:55:18.588445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.588537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.209 [2024-11-18 10:55:18.588548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:53.209 [2024-11-18 10:55:18.588562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:53.209 [2024-11-18 10:55:18.588570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.209 [2024-11-18 10:55:18.589717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3956.816 ms, result 0 00:26:53.209 { 00:26:53.209 "name": "ftl", 00:26:53.209 "uuid": "6089a30d-d95d-4895-84aa-791f092cefb7" 00:26:53.209 } 00:26:53.209 10:55:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:53.209 [2024-11-18 10:55:18.800918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.209 10:55:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:53.209 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:53.482 [2024-11-18 10:55:19.221392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:53.483 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:53.746 [2024-11-18 10:55:19.426741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:53.746 10:55:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:54.007 Fill FTL, iteration 1 00:26:54.007 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:54.007 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:54.007 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:54.007 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:54.007 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80489 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80489 /var/tmp/spdk.tgt.sock 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80489 ']' 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:54.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.008 10:55:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:54.008 [2024-11-18 10:55:19.882296] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:54.008 [2024-11-18 10:55:19.882694] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80489 ] 00:26:54.269 [2024-11-18 10:55:20.049161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.531 [2024-11-18 10:55:20.181412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.102 10:55:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.102 10:55:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:55.102 10:55:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:55.364 ftln1 00:26:55.364 10:55:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:55.364 10:55:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80489 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80489 ']' 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80489 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80489 00:26:55.626 killing process with pid 80489 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80489' 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80489 00:26:55.626 10:55:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80489 00:26:57.013 10:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:57.013 10:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:57.013 [2024-11-18 10:55:22.877457] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
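The spdk_dd that has just launched never opens the FTL bdev directly: ftl/common.sh first exports it over NVMe/TCP on loopback, then a second SPDK app (the initiator answering on /var/tmp/spdk.tgt.sock) attaches to that subsystem and saves its bdev config to ini.json for spdk_dd to load via --json. Condensed from the RPCs traced above, with rpc.py short for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    # target side: the FTL bdev becomes namespace 1 of cnode0 on 127.0.0.1:4420
    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

    # initiator side: the attached namespace surfaces as bdev "ftln1",
    # the --ob/--ib target of every tcp_dd call in this test
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0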
00:26:57.013 [2024-11-18 10:55:22.877569] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80533 ] 00:26:57.274 [2024-11-18 10:55:23.035653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.274 [2024-11-18 10:55:23.139441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.659  [2024-11-18T10:55:25.930Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-18T10:55:26.866Z] Copying: 378/1024 [MB] (186 MBps) [2024-11-18T10:55:27.802Z] Copying: 566/1024 [MB] (188 MBps) [2024-11-18T10:55:28.738Z] Copying: 801/1024 [MB] (235 MBps) [2024-11-18T10:55:29.305Z] Copying: 1024/1024 [MB] (average 208 MBps) 00:27:03.421 00:27:03.421 Calculate MD5 checksum, iteration 1 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:03.421 10:55:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:03.421 [2024-11-18 10:55:29.104354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
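The fill pass above moved 1024 MiB at an average 208 MBps, so roughly five seconds of wall time. The verify pass now starting reads the same window back through the target and reduces it to a fingerprint; spelled out with the paths used in this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    # the first field of md5sum is what gets recorded in sums[]
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '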
00:27:03.421 [2024-11-18 10:55:29.104765] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80601 ] 00:27:03.421 [2024-11-18 10:55:29.260131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.689 [2024-11-18 10:55:29.351919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.076  [2024-11-18T10:55:31.528Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-18T10:55:32.097Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:27:06.213 00:27:06.213 10:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:06.213 10:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:08.126 Fill FTL, iteration 2 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=91ead2ff4d7d590e11375509a51fb172 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:08.126 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:08.127 10:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:08.127 [2024-11-18 10:55:33.973418] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
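With iteration 1's digest (91ead2ff4d7d590e11375509a51fb172) banked in sums[0], seek and skip each advance by count so that iteration 2 exercises the next 1 GiB window. The driving loop in upgrade_shutdown.sh, paraphrased from the xtrace rather than quoted (FTL_TEST_FILE stands in for test/ftl/file; that variable name is illustrative):

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0 sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=$FTL_TEST_FILE --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$( md5sum $FTL_TEST_FILE | cut -f1 '-d ' )
    done

The digests are kept so the same windows can be re-read and compared after the shutdown/restart cycle; that comparison falls past the end of this excerpt.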
00:27:08.127 [2024-11-18 10:55:33.973520] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80652 ] 00:27:08.386 [2024-11-18 10:55:34.123365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.386 [2024-11-18 10:55:34.214562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.760  [2024-11-18T10:55:36.579Z] Copying: 226/1024 [MB] (226 MBps) [2024-11-18T10:55:37.954Z] Copying: 465/1024 [MB] (239 MBps) [2024-11-18T10:55:38.890Z] Copying: 694/1024 [MB] (229 MBps) [2024-11-18T10:55:39.151Z] Copying: 932/1024 [MB] (238 MBps) [2024-11-18T10:55:39.717Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:27:13.833 00:27:13.833 Calculate MD5 checksum, iteration 2 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:13.833 10:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:13.833 [2024-11-18 10:55:39.604056] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
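Iteration 2's readback uses --skip=1024 with the same 1 MiB block size, so the two passes cover disjoint 1 GiB windows of ftln1:

    window 1: blocks [   0, 1024) x 1048576 B = bytes [0 GiB, 1 GiB)
    window 2: blocks [1024, 2048) x 1048576 B = bytes [1 GiB, 2 GiB)

Each window therefore gets its own entry in sums[], so a mismatch after the restart would localize corruption to one of the two fills rather than just flagging the device as a whole.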
00:27:13.833 [2024-11-18 10:55:39.604170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80715 ] 00:27:14.093 [2024-11-18 10:55:39.762128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.093 [2024-11-18 10:55:39.850376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.469  [2024-11-18T10:55:41.920Z] Copying: 638/1024 [MB] (638 MBps) [2024-11-18T10:55:42.858Z] Copying: 1024/1024 [MB] (average 639 MBps) 00:27:16.974 00:27:16.974 10:55:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:16.974 10:55:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:19.606 10:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:19.606 10:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=018627679955e91aff96292c639cf454 00:27:19.606 10:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:19.606 10:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.606 10:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:19.606 [2024-11-18 10:55:45.133488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.606 [2024-11-18 10:55:45.133529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:19.606 [2024-11-18 10:55:45.133540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:19.606 [2024-11-18 10:55:45.133546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.606 [2024-11-18 10:55:45.133564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.606 [2024-11-18 10:55:45.133571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:19.606 [2024-11-18 10:55:45.133577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:19.606 [2024-11-18 10:55:45.133585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.606 [2024-11-18 10:55:45.133601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.606 [2024-11-18 10:55:45.133607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:19.606 [2024-11-18 10:55:45.133613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:19.606 [2024-11-18 10:55:45.133619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.606 [2024-11-18 10:55:45.133668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.170 ms, result 0 00:27:19.606 true 00:27:19.606 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.606 { 00:27:19.606 "name": "ftl", 00:27:19.606 "properties": [ 00:27:19.606 { 00:27:19.606 "name": "superblock_version", 00:27:19.606 "value": 5, 00:27:19.606 "read-only": true 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "name": "base_device", 00:27:19.606 "bands": [ 00:27:19.606 { 00:27:19.606 "id": 0, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 
00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 1, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 2, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 3, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 4, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 5, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 6, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 7, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 8, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 9, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 10, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 11, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 12, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 13, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 14, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 15, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 16, 00:27:19.606 "state": "FREE", 00:27:19.606 "validity": 0.0 00:27:19.606 }, 00:27:19.606 { 00:27:19.606 "id": 17, 00:27:19.606 "state": "FREE", 00:27:19.607 "validity": 0.0 00:27:19.607 } 00:27:19.607 ], 00:27:19.607 "read-only": true 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "name": "cache_device", 00:27:19.607 "type": "bdev", 00:27:19.607 "chunks": [ 00:27:19.607 { 00:27:19.607 "id": 0, 00:27:19.607 "state": "INACTIVE", 00:27:19.607 "utilization": 0.0 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "id": 1, 00:27:19.607 "state": "CLOSED", 00:27:19.607 "utilization": 1.0 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "id": 2, 00:27:19.607 "state": "CLOSED", 00:27:19.607 "utilization": 1.0 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "id": 3, 00:27:19.607 "state": "OPEN", 00:27:19.607 "utilization": 0.001953125 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "id": 4, 00:27:19.607 "state": "OPEN", 00:27:19.607 "utilization": 0.0 00:27:19.607 } 00:27:19.607 ], 00:27:19.607 "read-only": true 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "name": "verbose_mode", 00:27:19.607 "value": true, 00:27:19.607 "unit": "", 00:27:19.607 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:19.607 }, 00:27:19.607 { 00:27:19.607 "name": "prep_upgrade_on_shutdown", 00:27:19.607 "value": false, 00:27:19.607 "unit": "", 00:27:19.607 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:19.607 } 00:27:19.607 ] 00:27:19.607 } 00:27:19.607 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:19.867 [2024-11-18 10:55:45.565818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
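In the dump above, cache chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN at 0.001953125, so three chunks hold data; the INACTIVE chunk 0 and the empty OPEN chunk 4 do not count. That is exactly what the jq pipeline invoked a few lines below computes when it derives used=3:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'

The [[ 3 -eq 0 ]] guard that follows presumably exists so the shutdown path is never exercised against an empty cache; here it fails (3 is non-zero), so the test proceeds with dirty data to carry across the prep_upgrade_on_shutdown cycle.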
00:27:19.867 [2024-11-18 10:55:45.565986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:19.867 [2024-11-18 10:55:45.566039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:19.867 [2024-11-18 10:55:45.566058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.867 [2024-11-18 10:55:45.566091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.867 [2024-11-18 10:55:45.566109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:19.867 [2024-11-18 10:55:45.566123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:19.867 [2024-11-18 10:55:45.566138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.867 [2024-11-18 10:55:45.566161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.867 [2024-11-18 10:55:45.566177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:19.867 [2024-11-18 10:55:45.566192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:19.867 [2024-11-18 10:55:45.566252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.867 [2024-11-18 10:55:45.566323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.490 ms, result 0 00:27:19.867 true 00:27:19.867 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:19.867 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:19.867 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:20.126 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:20.126 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:20.126 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:20.126 [2024-11-18 10:55:45.970127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.126 [2024-11-18 10:55:45.970265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:20.126 [2024-11-18 10:55:45.970313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:20.126 [2024-11-18 10:55:45.970333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.126 [2024-11-18 10:55:45.970363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.126 [2024-11-18 10:55:45.970379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:20.126 [2024-11-18 10:55:45.970394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:20.126 [2024-11-18 10:55:45.970408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.126 [2024-11-18 10:55:45.970432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.126 [2024-11-18 10:55:45.970448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:20.126 [2024-11-18 10:55:45.970463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:20.126 [2024-11-18 10:55:45.970502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:20.126 [2024-11-18 10:55:45.970560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.421 ms, result 0 00:27:20.126 true 00:27:20.126 10:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:20.386 { 00:27:20.386 "name": "ftl", 00:27:20.386 "properties": [ 00:27:20.386 { 00:27:20.386 "name": "superblock_version", 00:27:20.386 "value": 5, 00:27:20.386 "read-only": true 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "name": "base_device", 00:27:20.386 "bands": [ 00:27:20.386 { 00:27:20.386 "id": 0, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 1, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 2, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 3, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 4, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 5, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 6, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 7, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 8, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 9, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 10, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 11, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 12, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 13, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 14, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 15, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 16, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 17, 00:27:20.386 "state": "FREE", 00:27:20.386 "validity": 0.0 00:27:20.386 } 00:27:20.386 ], 00:27:20.386 "read-only": true 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "name": "cache_device", 00:27:20.386 "type": "bdev", 00:27:20.386 "chunks": [ 00:27:20.386 { 00:27:20.386 "id": 0, 00:27:20.386 "state": "INACTIVE", 00:27:20.386 "utilization": 0.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 1, 00:27:20.386 "state": "CLOSED", 00:27:20.386 "utilization": 1.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 2, 00:27:20.386 "state": "CLOSED", 00:27:20.386 "utilization": 1.0 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 3, 00:27:20.386 "state": "OPEN", 00:27:20.386 "utilization": 0.001953125 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "id": 4, 00:27:20.386 "state": "OPEN", 00:27:20.386 "utilization": 0.0 00:27:20.386 } 00:27:20.386 ], 00:27:20.386 "read-only": true 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "name": "verbose_mode", 
00:27:20.386 "value": true, 00:27:20.386 "unit": "", 00:27:20.386 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:20.386 }, 00:27:20.386 { 00:27:20.386 "name": "prep_upgrade_on_shutdown", 00:27:20.386 "value": true, 00:27:20.386 "unit": "", 00:27:20.386 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:20.386 } 00:27:20.386 ] 00:27:20.386 } 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80361 ]] 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80361 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80361 ']' 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80361 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80361 00:27:20.386 killing process with pid 80361 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80361' 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80361 00:27:20.386 10:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80361 00:27:20.956 [2024-11-18 10:55:46.715303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:20.956 [2024-11-18 10:55:46.727505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.956 [2024-11-18 10:55:46.727540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:20.956 [2024-11-18 10:55:46.727550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:20.956 [2024-11-18 10:55:46.727557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.956 [2024-11-18 10:55:46.727573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:20.956 [2024-11-18 10:55:46.729693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.956 [2024-11-18 10:55:46.729719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:20.956 [2024-11-18 10:55:46.729727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.109 ms 00:27:20.956 [2024-11-18 10:55:46.729733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.277709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.277765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:29.093 [2024-11-18 10:55:54.277777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7547.932 ms 00:27:29.093 [2024-11-18 10:55:54.277787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.278792] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.278806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:29.093 [2024-11-18 10:55:54.278813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.993 ms 00:27:29.093 [2024-11-18 10:55:54.278819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.279675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.279694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:29.093 [2024-11-18 10:55:54.279701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:27:29.093 [2024-11-18 10:55:54.279706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.287292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.287318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:29.093 [2024-11-18 10:55:54.287326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.558 ms 00:27:29.093 [2024-11-18 10:55:54.287332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.292167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.292193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:29.093 [2024-11-18 10:55:54.292201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.810 ms 00:27:29.093 [2024-11-18 10:55:54.292220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.292263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.292270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:29.093 [2024-11-18 10:55:54.292280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:29.093 [2024-11-18 10:55:54.292286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.299513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.299538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:29.093 [2024-11-18 10:55:54.299545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.216 ms 00:27:29.093 [2024-11-18 10:55:54.299551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.306568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.306591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:29.093 [2024-11-18 10:55:54.306598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.992 ms 00:27:29.093 [2024-11-18 10:55:54.306603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.313152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.313176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:29.093 [2024-11-18 10:55:54.313183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.525 ms 00:27:29.093 [2024-11-18 10:55:54.313188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.320094] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.093 [2024-11-18 10:55:54.320118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:29.093 [2024-11-18 10:55:54.320124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.848 ms 00:27:29.093 [2024-11-18 10:55:54.320130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.093 [2024-11-18 10:55:54.320153] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:29.093 [2024-11-18 10:55:54.320163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:29.093 [2024-11-18 10:55:54.320171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:29.093 [2024-11-18 10:55:54.320185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:29.093 [2024-11-18 10:55:54.320191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:29.093 [2024-11-18 10:55:54.320197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:29.094 [2024-11-18 10:55:54.320292] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:29.094 [2024-11-18 10:55:54.320298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6089a30d-d95d-4895-84aa-791f092cefb7 00:27:29.094 [2024-11-18 10:55:54.320304] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:29.094 [2024-11-18 10:55:54.320309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:29.094 [2024-11-18 10:55:54.320314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:29.094 [2024-11-18 10:55:54.320320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:29.094 [2024-11-18 10:55:54.320325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:29.094 [2024-11-18 10:55:54.320332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:29.094 [2024-11-18 10:55:54.320338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:29.094 [2024-11-18 10:55:54.320343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:29.094 [2024-11-18 10:55:54.320347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:29.094 [2024-11-18 10:55:54.320353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.094 [2024-11-18 10:55:54.320362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:29.094 [2024-11-18 10:55:54.320369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:27:29.094 [2024-11-18 10:55:54.320375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.330049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.094 [2024-11-18 10:55:54.330074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:29.094 [2024-11-18 10:55:54.330081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.662 ms 00:27:29.094 [2024-11-18 10:55:54.330091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.330377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.094 [2024-11-18 10:55:54.330385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:29.094 [2024-11-18 10:55:54.330391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:27:29.094 [2024-11-18 10:55:54.330396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.363301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.363325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:29.094 [2024-11-18 10:55:54.363336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.363342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.363362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.363369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:29.094 [2024-11-18 10:55:54.363375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.363381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.363424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.363432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:29.094 [2024-11-18 10:55:54.363438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.363443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.363458] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.363464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:29.094 [2024-11-18 10:55:54.363470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.363475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.423409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.423440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:29.094 [2024-11-18 10:55:54.423448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.423458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.472456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:29.094 [2024-11-18 10:55:54.472613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.472681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:29.094 [2024-11-18 10:55:54.472695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.472736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:29.094 [2024-11-18 10:55:54.472750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.472826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:29.094 [2024-11-18 10:55:54.472839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.472867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:29.094 [2024-11-18 10:55:54.472882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.472916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:29.094 [2024-11-18 10:55:54.472929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 
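The statistics dumped above cross-check cleanly. The valid-block counts in bands 1-3 sum to exactly the user payload, and the write amplification follows from the two write counters:

    user writes     = 524288 blocks x 4 KiB = 2 GiB (the two 1 GiB fills)
    bands 1-3 valid = 261120 + 261120 + 2048 = 524288 blocks
    WAF             = total writes / user writes
                    = 786752 / 524288
                    = 1.5006...

The extra ~262k blocks beyond the user payload are FTL's own metadata and relocation writes, which is what the reported WAF of 1.5006 is measuring.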
[2024-11-18 10:55:54.472968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:29.094 [2024-11-18 10:55:54.472977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:29.094 [2024-11-18 10:55:54.472983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:29.094 [2024-11-18 10:55:54.472988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.094 [2024-11-18 10:55:54.473078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7745.529 ms, result 0 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80895 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80895 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80895 ']' 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.306 10:55:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.567 [2024-11-18 10:55:59.262478] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
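The statistics dumped at the top of this shutdown trace are internally consistent: the reported WAF of 1.5006 is simply total media writes divided by user writes, 786752 / 524288. A one-line check of that arithmetic (bc is used here only for the division, it is not part of the test scripts):

  echo "scale=4; 786752 / 524288" | bc   # prints 1.5006, matching the [FTL][ftl] WAF line above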
00:27:33.567 [2024-11-18 10:55:59.265346] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80895 ] 00:27:33.567 [2024-11-18 10:55:59.426673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.829 [2024-11-18 10:55:59.512938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.400 [2024-11-18 10:56:00.080296] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:34.400 [2024-11-18 10:56:00.080348] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:34.400 [2024-11-18 10:56:00.223480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.223642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:34.400 [2024-11-18 10:56:00.223658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:34.400 [2024-11-18 10:56:00.223665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.223710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.223718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:34.400 [2024-11-18 10:56:00.223724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:34.400 [2024-11-18 10:56:00.223730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.223750] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:34.400 [2024-11-18 10:56:00.224318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:34.400 [2024-11-18 10:56:00.224330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.224336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:34.400 [2024-11-18 10:56:00.224343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms 00:27:34.400 [2024-11-18 10:56:00.224349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.225313] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:34.400 [2024-11-18 10:56:00.234857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.234973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:34.400 [2024-11-18 10:56:00.234991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.546 ms 00:27:34.400 [2024-11-18 10:56:00.234997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.235037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.235044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:34.400 [2024-11-18 10:56:00.235051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:34.400 [2024-11-18 10:56:00.235056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.239417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 
10:56:00.239443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:34.400 [2024-11-18 10:56:00.239451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.310 ms 00:27:34.400 [2024-11-18 10:56:00.239456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.239499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.239507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:34.400 [2024-11-18 10:56:00.239513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:34.400 [2024-11-18 10:56:00.239519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.239553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.239560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:34.400 [2024-11-18 10:56:00.239568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:34.400 [2024-11-18 10:56:00.239574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.239593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:34.400 [2024-11-18 10:56:00.242241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.242264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:34.400 [2024-11-18 10:56:00.242271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.653 ms 00:27:34.400 [2024-11-18 10:56:00.242279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.242301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.400 [2024-11-18 10:56:00.242307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:34.400 [2024-11-18 10:56:00.242313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:34.400 [2024-11-18 10:56:00.242318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.400 [2024-11-18 10:56:00.242333] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:34.400 [2024-11-18 10:56:00.242347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:34.400 [2024-11-18 10:56:00.242375] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:34.400 [2024-11-18 10:56:00.242386] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:34.400 [2024-11-18 10:56:00.242465] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:34.400 [2024-11-18 10:56:00.242473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:34.400 [2024-11-18 10:56:00.242481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:34.400 [2024-11-18 10:56:00.242489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:34.400 [2024-11-18 10:56:00.242496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:34.400 [2024-11-18 10:56:00.242504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:34.401 [2024-11-18 10:56:00.242509] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:34.401 [2024-11-18 10:56:00.242514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:34.401 [2024-11-18 10:56:00.242520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:34.401 [2024-11-18 10:56:00.242526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.401 [2024-11-18 10:56:00.242532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:34.401 [2024-11-18 10:56:00.242538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:27:34.401 [2024-11-18 10:56:00.242543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.401 [2024-11-18 10:56:00.242607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.401 [2024-11-18 10:56:00.242613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:34.401 [2024-11-18 10:56:00.242619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:34.401 [2024-11-18 10:56:00.242626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.401 [2024-11-18 10:56:00.242700] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:34.401 [2024-11-18 10:56:00.242708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:34.401 [2024-11-18 10:56:00.242714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:34.401 [2024-11-18 10:56:00.242730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:34.401 [2024-11-18 10:56:00.242740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:34.401 [2024-11-18 10:56:00.242746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:34.401 [2024-11-18 10:56:00.242751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:34.401 [2024-11-18 10:56:00.242761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:34.401 [2024-11-18 10:56:00.242766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:34.401 [2024-11-18 10:56:00.242778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:34.401 [2024-11-18 10:56:00.242783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:34.401 [2024-11-18 10:56:00.242793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:34.401 [2024-11-18 10:56:00.242798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242803] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:34.401 [2024-11-18 10:56:00.242808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:34.401 [2024-11-18 10:56:00.242813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:34.401 [2024-11-18 10:56:00.242822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:34.401 [2024-11-18 10:56:00.242827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:34.401 [2024-11-18 10:56:00.242842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:34.401 [2024-11-18 10:56:00.242847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:34.401 [2024-11-18 10:56:00.242857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:34.401 [2024-11-18 10:56:00.242861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:34.401 [2024-11-18 10:56:00.242871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:34.401 [2024-11-18 10:56:00.242876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:34.401 [2024-11-18 10:56:00.242886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:34.401 [2024-11-18 10:56:00.242901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:34.401 [2024-11-18 10:56:00.242915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:34.401 [2024-11-18 10:56:00.242920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242925] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:34.401 [2024-11-18 10:56:00.242931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:34.401 [2024-11-18 10:56:00.242937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:34.401 [2024-11-18 10:56:00.242954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:34.401 [2024-11-18 10:56:00.242959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:34.401 [2024-11-18 10:56:00.242964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:34.401 [2024-11-18 10:56:00.242969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:34.401 [2024-11-18 10:56:00.242974] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:34.401 [2024-11-18 10:56:00.242979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:34.401 [2024-11-18 10:56:00.242985] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:34.401 [2024-11-18 10:56:00.242992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.242997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:34.401 [2024-11-18 10:56:00.243003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:34.401 [2024-11-18 10:56:00.243019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:34.401 [2024-11-18 10:56:00.243024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:34.401 [2024-11-18 10:56:00.243029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:34.401 [2024-11-18 10:56:00.243034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:34.401 [2024-11-18 10:56:00.243072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:34.401 [2024-11-18 10:56:00.243078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:34.401 [2024-11-18 10:56:00.243090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:34.401 [2024-11-18 10:56:00.243095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:34.401 [2024-11-18 10:56:00.243100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:34.401 [2024-11-18 10:56:00.243106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.401 [2024-11-18 10:56:00.243111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:34.401 [2024-11-18 10:56:00.243117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:27:34.401 [2024-11-18 10:56:00.243122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.401 [2024-11-18 10:56:00.243154] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:34.401 [2024-11-18 10:56:00.243162] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:38.609 [2024-11-18 10:56:03.693058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.693127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:38.609 [2024-11-18 10:56:03.693145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3449.889 ms 00:27:38.609 [2024-11-18 10:56:03.693155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.724099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.724159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:38.609 [2024-11-18 10:56:03.724174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.701 ms 00:27:38.609 [2024-11-18 10:56:03.724183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.724425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.724448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:38.609 [2024-11-18 10:56:03.724459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:38.609 [2024-11-18 10:56:03.724468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.759502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.759548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:38.609 [2024-11-18 10:56:03.759561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.973 ms 00:27:38.609 [2024-11-18 10:56:03.759573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.759616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.759626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:38.609 [2024-11-18 10:56:03.759636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:38.609 [2024-11-18 10:56:03.759644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.760164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.760190] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:38.609 [2024-11-18 10:56:03.760202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.466 ms 00:27:38.609 [2024-11-18 10:56:03.760247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.760301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.760313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:38.609 [2024-11-18 10:56:03.760322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:38.609 [2024-11-18 10:56:03.760331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.777866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.609 [2024-11-18 10:56:03.777907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:38.609 [2024-11-18 10:56:03.777919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.506 ms 00:27:38.609 [2024-11-18 10:56:03.777928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.609 [2024-11-18 10:56:03.792235] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:38.610 [2024-11-18 10:56:03.792284] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:38.610 [2024-11-18 10:56:03.792297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.792306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:38.610 [2024-11-18 10:56:03.792317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.250 ms 00:27:38.610 [2024-11-18 10:56:03.792324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.806961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.807007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:38.610 [2024-11-18 10:56:03.807020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.607 ms 00:27:38.610 [2024-11-18 10:56:03.807028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.819349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.819405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:38.610 [2024-11-18 10:56:03.819417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.265 ms 00:27:38.610 [2024-11-18 10:56:03.819425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.831799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.831842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:38.610 [2024-11-18 10:56:03.831853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.346 ms 00:27:38.610 [2024-11-18 10:56:03.831861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.832558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.832593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:38.610 [2024-11-18 
10:56:03.832604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:27:38.610 [2024-11-18 10:56:03.832613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.905542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.905615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:38.610 [2024-11-18 10:56:03.905632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.906 ms 00:27:38.610 [2024-11-18 10:56:03.905642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.916988] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:38.610 [2024-11-18 10:56:03.918311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.918350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:38.610 [2024-11-18 10:56:03.918363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.608 ms 00:27:38.610 [2024-11-18 10:56:03.918372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.918468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.918484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:38.610 [2024-11-18 10:56:03.918495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:38.610 [2024-11-18 10:56:03.918503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.918564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.918576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:38.610 [2024-11-18 10:56:03.918585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:38.610 [2024-11-18 10:56:03.918594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.918617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.918627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:38.610 [2024-11-18 10:56:03.918636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:38.610 [2024-11-18 10:56:03.918647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.918686] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:38.610 [2024-11-18 10:56:03.918698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.918707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:38.610 [2024-11-18 10:56:03.918715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:38.610 [2024-11-18 10:56:03.918724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.944090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.944143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:38.610 [2024-11-18 10:56:03.944157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.344 ms 00:27:38.610 [2024-11-18 10:56:03.944166] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.944277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:03.944289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:38.610 [2024-11-18 10:56:03.944298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:38.610 [2024-11-18 10:56:03.944306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:03.945576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3721.575 ms, result 0 00:27:38.610 [2024-11-18 10:56:03.960548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.610 [2024-11-18 10:56:03.976543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:38.610 [2024-11-18 10:56:03.984720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:38.610 10:56:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.610 10:56:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:38.610 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.610 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:38.610 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:38.610 [2024-11-18 10:56:04.228710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:04.228761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:38.610 [2024-11-18 10:56:04.228776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:38.610 [2024-11-18 10:56:04.228788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:04.228811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:04.228820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:38.610 [2024-11-18 10:56:04.228829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.610 [2024-11-18 10:56:04.228837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:04.228858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.610 [2024-11-18 10:56:04.228867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:38.610 [2024-11-18 10:56:04.228876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.610 [2024-11-18 10:56:04.228884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.610 [2024-11-18 10:56:04.228946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.225 ms, result 0 00:27:38.610 true 00:27:38.610 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.610 { 00:27:38.610 "name": "ftl", 00:27:38.610 "properties": [ 00:27:38.610 { 00:27:38.610 "name": "superblock_version", 00:27:38.610 "value": 5, 00:27:38.610 "read-only": true 00:27:38.610 }, 
00:27:38.610 { 00:27:38.610 "name": "base_device", 00:27:38.610 "bands": [ 00:27:38.610 { 00:27:38.610 "id": 0, 00:27:38.610 "state": "CLOSED", 00:27:38.610 "validity": 1.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 1, 00:27:38.610 "state": "CLOSED", 00:27:38.610 "validity": 1.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 2, 00:27:38.610 "state": "CLOSED", 00:27:38.610 "validity": 0.007843137254901933 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 3, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 4, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 5, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 6, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 7, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 8, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 9, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 10, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 11, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.610 "id": 12, 00:27:38.610 "state": "FREE", 00:27:38.610 "validity": 0.0 00:27:38.610 }, 00:27:38.610 { 00:27:38.611 "id": 13, 00:27:38.611 "state": "FREE", 00:27:38.611 "validity": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 14, 00:27:38.611 "state": "FREE", 00:27:38.611 "validity": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 15, 00:27:38.611 "state": "FREE", 00:27:38.611 "validity": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 16, 00:27:38.611 "state": "FREE", 00:27:38.611 "validity": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 17, 00:27:38.611 "state": "FREE", 00:27:38.611 "validity": 0.0 00:27:38.611 } 00:27:38.611 ], 00:27:38.611 "read-only": true 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "name": "cache_device", 00:27:38.611 "type": "bdev", 00:27:38.611 "chunks": [ 00:27:38.611 { 00:27:38.611 "id": 0, 00:27:38.611 "state": "INACTIVE", 00:27:38.611 "utilization": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 1, 00:27:38.611 "state": "OPEN", 00:27:38.611 "utilization": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 2, 00:27:38.611 "state": "OPEN", 00:27:38.611 "utilization": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 3, 00:27:38.611 "state": "FREE", 00:27:38.611 "utilization": 0.0 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "id": 4, 00:27:38.611 "state": "FREE", 00:27:38.611 "utilization": 0.0 00:27:38.611 } 00:27:38.611 ], 00:27:38.611 "read-only": true 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "name": "verbose_mode", 00:27:38.611 "value": true, 00:27:38.611 "unit": "", 00:27:38.611 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:38.611 }, 00:27:38.611 { 00:27:38.611 "name": "prep_upgrade_on_shutdown", 00:27:38.611 "value": false, 00:27:38.611 "unit": "", 00:27:38.611 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:38.611 } 00:27:38.611 ] 00:27:38.611 } 00:27:38.611 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:38.611 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:38.611 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.872 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:38.872 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:38.872 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:38.872 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.872 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:39.134 Validate MD5 checksum, iteration 1 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:39.134 10:56:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:39.134 [2024-11-18 10:56:04.987574] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:27:39.134 [2024-11-18 10:56:04.988061] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80975 ] 00:27:39.395 [2024-11-18 10:56:05.154580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.395 [2024-11-18 10:56:05.277189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.310  [2024-11-18T10:56:07.767Z] Copying: 552/1024 [MB] (552 MBps) [2024-11-18T10:56:08.710Z] Copying: 1024/1024 [MB] (average 558 MBps) 00:27:42.826 00:27:42.826 10:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:42.826 10:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=91ead2ff4d7d590e11375509a51fb172 00:27:45.373 Validate MD5 checksum, iteration 2 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 91ead2ff4d7d590e11375509a51fb172 != \9\1\e\a\d\2\f\f\4\d\7\d\5\9\0\e\1\1\3\7\5\5\0\9\a\5\1\f\b\1\7\2 ]] 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:45.373 10:56:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:45.373 [2024-11-18 10:56:10.971110] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
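Each "Validate MD5 checksum" iteration above reads 1024 MiB from the ftln1 initiator-side bdev with spdk_dd, hashes the resulting file, and compares the digest against a previously recorded checksum for that 1 GiB slice; skip then advances by 1024 for the next slice. A minimal sketch of one iteration reusing the exact spdk_dd flags from the log; the iteration_ok helper and its expected argument are illustrative, not the suite's own code:

  iteration_ok() {
    local skip=$1 expected=$2   # skip is in 1 MiB blocks, expected is an md5 hex digest
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    local sum
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum == "$expected" ]]
  }
  iteration_ok 0    91ead2ff4d7d590e11375509a51fb172   # iteration 1 in the log
  iteration_ok 1024 018627679955e91aff96292c639cf454   # iteration 2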
00:27:45.373 [2024-11-18 10:56:10.971795] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81042 ] 00:27:45.373 [2024-11-18 10:56:11.122342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.373 [2024-11-18 10:56:11.254178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.282  [2024-11-18T10:56:13.427Z] Copying: 697/1024 [MB] (697 MBps) [2024-11-18T10:56:13.998Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:27:48.114 00:27:48.114 10:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:48.114 10:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:50.017 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:50.017 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=018627679955e91aff96292c639cf454 00:27:50.017 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 018627679955e91aff96292c639cf454 != \0\1\8\6\2\7\6\7\9\9\5\5\e\9\1\a\f\f\9\6\2\9\2\c\6\3\9\c\f\4\5\4 ]] 00:27:50.017 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80895 ]] 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80895 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81094 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81094 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81094 ']' 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
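The tcp_target_shutdown_dirty / tcp_target_setup pair above is the core of the upgrade-shutdown scenario: the spdk_tgt process holding the FTL instance is killed with SIGKILL so the device cannot shut down cleanly, then a fresh target (pid 81094 here) is started from the same saved tgt.json and the test waits for its RPC socket before continuing. A condensed sketch of that sequence; the readiness probe via rpc_get_methods is an assumption of this sketch, not the suite's waitforlisten helper:

  kill -9 "$spdk_tgt_pid"   # dirty shutdown: no clean FTL shutdown path runs
  unset spdk_tgt_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # poll the UNIX-domain RPC socket until the target answers
  done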
00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.018 10:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.018 [2024-11-18 10:56:15.672717] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:50.018 [2024-11-18 10:56:15.672832] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81094 ] 00:27:50.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80895 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:50.018 [2024-11-18 10:56:15.829931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.279 [2024-11-18 10:56:15.905797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.851 [2024-11-18 10:56:16.469974] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.851 [2024-11-18 10:56:16.470023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.851 [2024-11-18 10:56:16.612896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.851 [2024-11-18 10:56:16.613012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:50.851 [2024-11-18 10:56:16.613027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.851 [2024-11-18 10:56:16.613034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.851 [2024-11-18 10:56:16.613078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.851 [2024-11-18 10:56:16.613085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:50.851 [2024-11-18 10:56:16.613092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:50.851 [2024-11-18 10:56:16.613098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.851 [2024-11-18 10:56:16.613117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:50.851 [2024-11-18 10:56:16.613662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:50.851 [2024-11-18 10:56:16.613675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.851 [2024-11-18 10:56:16.613681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:50.851 [2024-11-18 10:56:16.613688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 00:27:50.851 [2024-11-18 10:56:16.613693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.851 [2024-11-18 10:56:16.613926] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:50.851 [2024-11-18 10:56:16.626250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.626279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:50.852 [2024-11-18 10:56:16.626288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.324 ms 
00:27:50.852 [2024-11-18 10:56:16.626295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.633058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.633083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:50.852 [2024-11-18 10:56:16.633093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:50.852 [2024-11-18 10:56:16.633099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.633347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.633360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:50.852 [2024-11-18 10:56:16.633367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:27:50.852 [2024-11-18 10:56:16.633373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.633409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.633417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:50.852 [2024-11-18 10:56:16.633423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:50.852 [2024-11-18 10:56:16.633429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.633448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.633454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:50.852 [2024-11-18 10:56:16.633461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:50.852 [2024-11-18 10:56:16.633467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.633482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:50.852 [2024-11-18 10:56:16.635699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.635800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:50.852 [2024-11-18 10:56:16.635812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.220 ms 00:27:50.852 [2024-11-18 10:56:16.635817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.635840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.635846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:50.852 [2024-11-18 10:56:16.635852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:50.852 [2024-11-18 10:56:16.635858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.635874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:50.852 [2024-11-18 10:56:16.635887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:50.852 [2024-11-18 10:56:16.635913] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:50.852 [2024-11-18 10:56:16.635926] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:50.852 [2024-11-18 
10:56:16.636005] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:50.852 [2024-11-18 10:56:16.636013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:50.852 [2024-11-18 10:56:16.636021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:50.852 [2024-11-18 10:56:16.636028] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636035] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:50.852 [2024-11-18 10:56:16.636048] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:50.852 [2024-11-18 10:56:16.636054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:50.852 [2024-11-18 10:56:16.636059] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:50.852 [2024-11-18 10:56:16.636065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.636072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:50.852 [2024-11-18 10:56:16.636077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:27:50.852 [2024-11-18 10:56:16.636083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.636146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.852 [2024-11-18 10:56:16.636153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:50.852 [2024-11-18 10:56:16.636158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:50.852 [2024-11-18 10:56:16.636163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.852 [2024-11-18 10:56:16.636252] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:50.852 [2024-11-18 10:56:16.636261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:50.852 [2024-11-18 10:56:16.636269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:50.852 [2024-11-18 10:56:16.636287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:50.852 [2024-11-18 10:56:16.636298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:50.852 [2024-11-18 10:56:16.636304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:50.852 [2024-11-18 10:56:16.636309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:50.852 [2024-11-18 10:56:16.636320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:50.852 [2024-11-18 10:56:16.636325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 
10:56:16.636330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:50.852 [2024-11-18 10:56:16.636335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:50.852 [2024-11-18 10:56:16.636340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:50.852 [2024-11-18 10:56:16.636351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:50.852 [2024-11-18 10:56:16.636356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:50.852 [2024-11-18 10:56:16.636366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:50.852 [2024-11-18 10:56:16.636371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:50.852 [2024-11-18 10:56:16.636384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:50.852 [2024-11-18 10:56:16.636389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:50.852 [2024-11-18 10:56:16.636400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:50.852 [2024-11-18 10:56:16.636412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:50.852 [2024-11-18 10:56:16.636423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:50.852 [2024-11-18 10:56:16.636428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:50.852 [2024-11-18 10:56:16.636438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:50.852 [2024-11-18 10:56:16.636442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:50.852 [2024-11-18 10:56:16.636452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:50.852 [2024-11-18 10:56:16.636467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:50.852 [2024-11-18 10:56:16.636481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:50.852 [2024-11-18 10:56:16.636488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:50.852 [2024-11-18 10:56:16.636500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:50.852 
[2024-11-18 10:56:16.636506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.852 [2024-11-18 10:56:16.636517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:50.852 [2024-11-18 10:56:16.636523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:50.852 [2024-11-18 10:56:16.636528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:50.852 [2024-11-18 10:56:16.636533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:50.852 [2024-11-18 10:56:16.636538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:50.852 [2024-11-18 10:56:16.636543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:50.852 [2024-11-18 10:56:16.636549] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:50.852 [2024-11-18 10:56:16.636556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:50.853 [2024-11-18 10:56:16.636568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:50.853 [2024-11-18 10:56:16.636584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:50.853 [2024-11-18 10:56:16.636589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:50.853 [2024-11-18 10:56:16.636594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:50.853 [2024-11-18 10:56:16.636599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:50.853 [2024-11-18 10:56:16.636637] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:50.853 [2024-11-18 10:56:16.636643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:50.853 [2024-11-18 10:56:16.636654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:50.853 [2024-11-18 10:56:16.636660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:50.853 [2024-11-18 10:56:16.636665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:50.853 [2024-11-18 10:56:16.636672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.636679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:50.853 [2024-11-18 10:56:16.636684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:27:50.853 [2024-11-18 10:56:16.636690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.655738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.655763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:50.853 [2024-11-18 10:56:16.655771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.012 ms 00:27:50.853 [2024-11-18 10:56:16.655777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.655804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.655812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:50.853 [2024-11-18 10:56:16.655819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:50.853 [2024-11-18 10:56:16.655825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.679878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.679902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:50.853 [2024-11-18 10:56:16.679909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.013 ms 00:27:50.853 [2024-11-18 10:56:16.679915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.679934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.679940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:50.853 [2024-11-18 10:56:16.679946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:50.853 [2024-11-18 10:56:16.679952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.680021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.680028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:27:50.853 [2024-11-18 10:56:16.680034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:50.853 [2024-11-18 10:56:16.680040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.680068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.680074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:50.853 [2024-11-18 10:56:16.680080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:50.853 [2024-11-18 10:56:16.680086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.691383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.691406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:50.853 [2024-11-18 10:56:16.691414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.281 ms 00:27:50.853 [2024-11-18 10:56:16.691420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.691493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.691501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:50.853 [2024-11-18 10:56:16.691508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.853 [2024-11-18 10:56:16.691514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.719557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.719669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:50.853 [2024-11-18 10:56:16.719683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.028 ms 00:27:50.853 [2024-11-18 10:56:16.719690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.853 [2024-11-18 10:56:16.726721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.853 [2024-11-18 10:56:16.726811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:50.853 [2024-11-18 10:56:16.726830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:27:50.853 [2024-11-18 10:56:16.726836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.115 [2024-11-18 10:56:16.769811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.115 [2024-11-18 10:56:16.769941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:51.115 [2024-11-18 10:56:16.769959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.931 ms 00:27:51.115 [2024-11-18 10:56:16.769965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.115 [2024-11-18 10:56:16.770057] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:51.115 [2024-11-18 10:56:16.770131] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:51.115 [2024-11-18 10:56:16.770203] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:51.115 [2024-11-18 10:56:16.770287] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:51.115 [2024-11-18 10:56:16.770294] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.115 [2024-11-18 10:56:16.770301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:51.115 [2024-11-18 10:56:16.770308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:27:51.115 [2024-11-18 10:56:16.770314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.115 [2024-11-18 10:56:16.770357] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:51.115 [2024-11-18 10:56:16.770366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.115 [2024-11-18 10:56:16.770375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:51.115 [2024-11-18 10:56:16.770381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:51.115 [2024-11-18 10:56:16.770387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.115 [2024-11-18 10:56:16.781523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.115 [2024-11-18 10:56:16.781550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:51.115 [2024-11-18 10:56:16.781558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.119 ms 00:27:51.115 [2024-11-18 10:56:16.781564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.115 [2024-11-18 10:56:16.787870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.115 [2024-11-18 10:56:16.787894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:51.115 [2024-11-18 10:56:16.787902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:51.115 [2024-11-18 10:56:16.787908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.115 [2024-11-18 10:56:16.787966] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:51.115 [2024-11-18 10:56:16.788074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.115 [2024-11-18 10:56:16.788085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:51.115 [2024-11-18 10:56:16.788092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.109 ms 00:27:51.115 [2024-11-18 10:56:16.788097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.375 [2024-11-18 10:56:17.157380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.375 [2024-11-18 10:56:17.157428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:51.375 [2024-11-18 10:56:17.157439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 368.600 ms 00:27:51.375 [2024-11-18 10:56:17.157446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.375 [2024-11-18 10:56:17.160935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.375 [2024-11-18 10:56:17.160964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:51.375 [2024-11-18 10:56:17.160973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.044 ms 00:27:51.375 [2024-11-18 10:56:17.160980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.375 [2024-11-18 10:56:17.161374] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:27:51.376 [2024-11-18 10:56:17.161393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.376 [2024-11-18 10:56:17.161400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:51.376 [2024-11-18 10:56:17.161408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.384 ms 00:27:51.376 [2024-11-18 10:56:17.161414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.376 [2024-11-18 10:56:17.161531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.376 [2024-11-18 10:56:17.161541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:51.376 [2024-11-18 10:56:17.161548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:51.376 [2024-11-18 10:56:17.161554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.376 [2024-11-18 10:56:17.161596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 373.628 ms, result 0 00:27:51.376 [2024-11-18 10:56:17.161627] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:51.376 [2024-11-18 10:56:17.161733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.376 [2024-11-18 10:56:17.161747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:51.376 [2024-11-18 10:56:17.161754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.108 ms 00:27:51.376 [2024-11-18 10:56:17.161759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.618360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.618430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:51.949 [2024-11-18 10:56:17.618445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 455.846 ms 00:27:51.949 [2024-11-18 10:56:17.618454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.622959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.622998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:51.949 [2024-11-18 10:56:17.623009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.493 ms 00:27:51.949 [2024-11-18 10:56:17.623017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.623877] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:51.949 [2024-11-18 10:56:17.623913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.623922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:51.949 [2024-11-18 10:56:17.623931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.867 ms 00:27:51.949 [2024-11-18 10:56:17.623939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.623971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.623981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:51.949 [2024-11-18 10:56:17.623989] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.949 [2024-11-18 10:56:17.623996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.624032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 462.397 ms, result 0 00:27:51.949 [2024-11-18 10:56:17.624073] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:51.949 [2024-11-18 10:56:17.624085] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.949 [2024-11-18 10:56:17.624095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.624103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:51.949 [2024-11-18 10:56:17.624111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 836.140 ms 00:27:51.949 [2024-11-18 10:56:17.624119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.624149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.624158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:51.949 [2024-11-18 10:56:17.624169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.949 [2024-11-18 10:56:17.624177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.635438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:51.949 [2024-11-18 10:56:17.635554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.635565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:51.949 [2024-11-18 10:56:17.635574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.361 ms 00:27:51.949 [2024-11-18 10:56:17.635583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.636296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.636316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:51.949 [2024-11-18 10:56:17.636329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:27:51.949 [2024-11-18 10:56:17.636338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.638571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.638741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:51.949 [2024-11-18 10:56:17.638758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms 00:27:51.949 [2024-11-18 10:56:17.638767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.638819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.638831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:51.949 [2024-11-18 10:56:17.638839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.949 [2024-11-18 10:56:17.638852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.638957] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.638968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:51.949 [2024-11-18 10:56:17.638977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:51.949 [2024-11-18 10:56:17.638985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.639005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.639013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:51.949 [2024-11-18 10:56:17.639021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.949 [2024-11-18 10:56:17.639029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.639056] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:51.949 [2024-11-18 10:56:17.639067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.639077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:51.949 [2024-11-18 10:56:17.639085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:51.949 [2024-11-18 10:56:17.639092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.639145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.949 [2024-11-18 10:56:17.639156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:51.949 [2024-11-18 10:56:17.639164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:51.949 [2024-11-18 10:56:17.639172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.949 [2024-11-18 10:56:17.640150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1026.786 ms, result 0 00:27:51.949 [2024-11-18 10:56:17.655903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.949 [2024-11-18 10:56:17.671909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:51.949 [2024-11-18 10:56:17.680966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:52.518 Validate MD5 checksum, iteration 1 00:27:52.518 10:56:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.518 10:56:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:52.519 10:56:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.519 [2024-11-18 10:56:18.235557] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:52.519 [2024-11-18 10:56:18.235809] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81129 ] 00:27:52.519 [2024-11-18 10:56:18.387260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.780 [2024-11-18 10:56:18.488904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.164  [2024-11-18T10:56:20.620Z] Copying: 621/1024 [MB] (621 MBps) [2024-11-18T10:56:22.006Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:27:56.122 00:27:56.123 10:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:56.123 10:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.668 Validate MD5 checksum, iteration 2 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=91ead2ff4d7d590e11375509a51fb172 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 91ead2ff4d7d590e11375509a51fb172 != \9\1\e\a\d\2\f\f\4\d\7\d\5\9\0\e\1\1\3\7\5\5\0\9\a\5\1\f\b\1\7\2 ]] 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.668 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.669 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.669 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.669 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:58.669 10:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.669 [2024-11-18 10:56:24.030371] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:58.669 [2024-11-18 10:56:24.030489] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81190 ] 00:27:58.669 [2024-11-18 10:56:24.188242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.669 [2024-11-18 10:56:24.281847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.053  [2024-11-18T10:56:26.511Z] Copying: 617/1024 [MB] (617 MBps) [2024-11-18T10:56:31.798Z] Copying: 1024/1024 [MB] (average 596 MBps) 00:28:05.914 00:28:05.914 10:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:05.914 10:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=018627679955e91aff96292c639cf454 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 018627679955e91aff96292c639cf454 != \0\1\8\6\2\7\6\7\9\9\5\5\e\9\1\a\f\f\9\6\2\9\2\c\6\3\9\c\f\4\5\4 ]] 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81094 ]] 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81094 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81094 ']' 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81094 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:07.821 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.822 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81094 00:28:07.822 killing process with pid 81094 00:28:07.822 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:07.822 10:56:33 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:07.822 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81094' 00:28:07.822 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81094 00:28:07.822 10:56:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81094 00:28:08.081 [2024-11-18 10:56:33.882827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:08.081 [2024-11-18 10:56:33.892514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.892666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:08.081 [2024-11-18 10:56:33.892712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:08.081 [2024-11-18 10:56:33.892731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.892763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:08.081 [2024-11-18 10:56:33.895001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.895090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:08.081 [2024-11-18 10:56:33.895102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.054 ms 00:28:08.081 [2024-11-18 10:56:33.895112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.895298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.895308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:08.081 [2024-11-18 10:56:33.895315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:28:08.081 [2024-11-18 10:56:33.895321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.896558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.896630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:08.081 [2024-11-18 10:56:33.896674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.225 ms 00:28:08.081 [2024-11-18 10:56:33.896692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.897584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.897651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:08.081 [2024-11-18 10:56:33.897696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:28:08.081 [2024-11-18 10:56:33.897716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.905096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.905180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:08.081 [2024-11-18 10:56:33.905239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.343 ms 00:28:08.081 [2024-11-18 10:56:33.905263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.909210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.909288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:08.081 [2024-11-18 10:56:33.909354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.908 ms 00:28:08.081 [2024-11-18 10:56:33.909371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.909428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.909536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:08.081 [2024-11-18 10:56:33.909554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:08.081 [2024-11-18 10:56:33.909569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.916824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.916898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:08.081 [2024-11-18 10:56:33.916937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.229 ms 00:28:08.081 [2024-11-18 10:56:33.916953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.924150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.924233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:08.081 [2024-11-18 10:56:33.924274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.166 ms 00:28:08.081 [2024-11-18 10:56:33.924289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.931461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.931535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:08.081 [2024-11-18 10:56:33.931571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.142 ms 00:28:08.081 [2024-11-18 10:56:33.931587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.081 [2024-11-18 10:56:33.938647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.081 [2024-11-18 10:56:33.938723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:08.082 [2024-11-18 10:56:33.938759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.005 ms 00:28:08.082 [2024-11-18 10:56:33.938775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.082 [2024-11-18 10:56:33.938803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:08.082 [2024-11-18 10:56:33.938824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:08.082 [2024-11-18 10:56:33.938892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:08.082 [2024-11-18 10:56:33.938916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:08.082 [2024-11-18 10:56:33.938923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938941] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.938998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.939004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:08.082 [2024-11-18 10:56:33.939011] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:08.082 [2024-11-18 10:56:33.939017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6089a30d-d95d-4895-84aa-791f092cefb7 00:28:08.082 [2024-11-18 10:56:33.939022] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:08.082 [2024-11-18 10:56:33.939028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:08.082 [2024-11-18 10:56:33.939034] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:08.082 [2024-11-18 10:56:33.939040] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:08.082 [2024-11-18 10:56:33.939045] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:08.082 [2024-11-18 10:56:33.939050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:08.082 [2024-11-18 10:56:33.939056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:08.082 [2024-11-18 10:56:33.939060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:08.082 [2024-11-18 10:56:33.939065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:08.082 [2024-11-18 10:56:33.939072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.082 [2024-11-18 10:56:33.939081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:08.082 [2024-11-18 10:56:33.939088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:28:08.082 [2024-11-18 10:56:33.939093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.082 [2024-11-18 10:56:33.948958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.082 [2024-11-18 10:56:33.949034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:08.082 [2024-11-18 10:56:33.949112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.852 ms 00:28:08.082 [2024-11-18 10:56:33.949128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.082 [2024-11-18 10:56:33.949423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.082 [2024-11-18 10:56:33.949477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:08.082 [2024-11-18 10:56:33.949515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:28:08.082 [2024-11-18 10:56:33.949531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.343 [2024-11-18 10:56:33.982678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.343 [2024-11-18 10:56:33.982767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:08.343 [2024-11-18 10:56:33.982805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.343 [2024-11-18 10:56:33.982822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.343 [2024-11-18 10:56:33.982857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.343 [2024-11-18 10:56:33.982873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:08.343 [2024-11-18 10:56:33.982888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.343 [2024-11-18 10:56:33.982901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.343 [2024-11-18 10:56:33.982971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.343 [2024-11-18 10:56:33.982991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:08.343 [2024-11-18 10:56:33.983007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.343 [2024-11-18 10:56:33.983059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.343 [2024-11-18 10:56:33.983083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.343 [2024-11-18 10:56:33.983104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:08.343 [2024-11-18 10:56:33.983118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.343 [2024-11-18 10:56:33.983132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.343 [2024-11-18 10:56:34.042828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.343 [2024-11-18 10:56:34.042937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:08.343 [2024-11-18 10:56:34.042972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.343 [2024-11-18 10:56:34.042989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.343 [2024-11-18 10:56:34.090981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.343 [2024-11-18 10:56:34.091096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:08.343 [2024-11-18 10:56:34.091133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.343 [2024-11-18 10:56:34.091149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.091222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.344 [2024-11-18 10:56:34.091240] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:08.344 [2024-11-18 10:56:34.091256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.344 [2024-11-18 10:56:34.091270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.091323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.344 [2024-11-18 10:56:34.091340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:08.344 [2024-11-18 10:56:34.091360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.344 [2024-11-18 10:56:34.091408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.091493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.344 [2024-11-18 10:56:34.091672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:08.344 [2024-11-18 10:56:34.091692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.344 [2024-11-18 10:56:34.091707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.091744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.344 [2024-11-18 10:56:34.091762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:08.344 [2024-11-18 10:56:34.091776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.344 [2024-11-18 10:56:34.091825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.091863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.344 [2024-11-18 10:56:34.091881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:08.344 [2024-11-18 10:56:34.091896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.344 [2024-11-18 10:56:34.091910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.091952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.344 [2024-11-18 10:56:34.092018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:08.344 [2024-11-18 10:56:34.092035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.344 [2024-11-18 10:56:34.092050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.344 [2024-11-18 10:56:34.092149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 199.612 ms, result 0 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:08.916 Remove 
shared memory files 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80895 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:08.916 ************************************ 00:28:08.916 END TEST ftl_upgrade_shutdown 00:28:08.916 ************************************ 00:28:08.916 00:28:08.916 real 1m23.774s 00:28:08.916 user 1m54.716s 00:28:08.916 sys 0m20.140s 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.916 10:56:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.916 10:56:34 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:09.177 10:56:34 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:09.177 10:56:34 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:09.177 10:56:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.177 10:56:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:09.177 ************************************ 00:28:09.177 START TEST ftl_restore_fast 00:28:09.177 ************************************ 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:09.177 * Looking for test storage... 00:28:09.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.177 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.178 --rc genhtml_branch_coverage=1 00:28:09.178 --rc genhtml_function_coverage=1 00:28:09.178 --rc genhtml_legend=1 00:28:09.178 --rc geninfo_all_blocks=1 00:28:09.178 --rc geninfo_unexecuted_blocks=1 00:28:09.178 00:28:09.178 ' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.178 --rc genhtml_branch_coverage=1 00:28:09.178 --rc genhtml_function_coverage=1 00:28:09.178 --rc genhtml_legend=1 00:28:09.178 --rc geninfo_all_blocks=1 00:28:09.178 --rc geninfo_unexecuted_blocks=1 00:28:09.178 00:28:09.178 ' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.178 --rc genhtml_branch_coverage=1 00:28:09.178 --rc genhtml_function_coverage=1 00:28:09.178 --rc genhtml_legend=1 00:28:09.178 --rc geninfo_all_blocks=1 00:28:09.178 --rc geninfo_unexecuted_blocks=1 00:28:09.178 00:28:09.178 ' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.178 --rc genhtml_branch_coverage=1 00:28:09.178 --rc genhtml_function_coverage=1 00:28:09.178 --rc genhtml_legend=1 00:28:09.178 --rc geninfo_all_blocks=1 00:28:09.178 --rc geninfo_unexecuted_blocks=1 00:28:09.178 00:28:09.178 ' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.mj9OW1WLiO 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:09.178 10:56:34 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81380 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81380 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 81380 ']' 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:09.178 10:56:34 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:09.440 [2024-11-18 10:56:35.061852] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:09.440 [2024-11-18 10:56:35.062338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81380 ] 00:28:09.440 [2024-11-18 10:56:35.227024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.701 [2024-11-18 10:56:35.346627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:10.273 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:10.274 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:10.535 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:10.796 { 00:28:10.796 "name": "nvme0n1", 00:28:10.796 "aliases": [ 00:28:10.796 "b4c0d768-aca8-46d8-b102-8e009ee0698a" 00:28:10.796 ], 00:28:10.796 "product_name": "NVMe disk", 00:28:10.796 "block_size": 4096, 00:28:10.796 "num_blocks": 1310720, 00:28:10.796 "uuid": "b4c0d768-aca8-46d8-b102-8e009ee0698a", 00:28:10.796 "numa_id": -1, 00:28:10.796 "assigned_rate_limits": { 00:28:10.796 "rw_ios_per_sec": 0, 00:28:10.796 "rw_mbytes_per_sec": 0, 00:28:10.796 "r_mbytes_per_sec": 0, 00:28:10.796 "w_mbytes_per_sec": 0 00:28:10.796 }, 00:28:10.796 "claimed": true, 00:28:10.796 "claim_type": "read_many_write_one", 00:28:10.796 "zoned": false, 00:28:10.796 "supported_io_types": { 00:28:10.796 "read": true, 00:28:10.796 "write": true, 00:28:10.796 "unmap": true, 00:28:10.796 "flush": true, 00:28:10.796 "reset": true, 00:28:10.796 "nvme_admin": true, 00:28:10.796 "nvme_io": true, 00:28:10.796 "nvme_io_md": false, 00:28:10.796 "write_zeroes": true, 00:28:10.796 "zcopy": false, 00:28:10.796 "get_zone_info": false, 00:28:10.796 "zone_management": false, 00:28:10.796 "zone_append": false, 00:28:10.796 "compare": true, 00:28:10.796 "compare_and_write": false, 00:28:10.796 "abort": true, 00:28:10.796 "seek_hole": false, 00:28:10.796 "seek_data": false, 00:28:10.796 "copy": true, 00:28:10.796 "nvme_iov_md": false 00:28:10.796 }, 00:28:10.796 "driver_specific": { 00:28:10.796 "nvme": [ 00:28:10.796 { 00:28:10.796 "pci_address": "0000:00:11.0", 00:28:10.796 "trid": { 00:28:10.796 "trtype": "PCIe", 00:28:10.796 "traddr": "0000:00:11.0" 00:28:10.796 }, 00:28:10.796 "ctrlr_data": { 00:28:10.796 "cntlid": 0, 00:28:10.796 "vendor_id": "0x1b36", 00:28:10.796 "model_number": "QEMU NVMe Ctrl", 00:28:10.796 "serial_number": "12341", 00:28:10.796 "firmware_revision": "8.0.0", 00:28:10.796 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:10.796 "oacs": { 00:28:10.796 "security": 0, 00:28:10.796 "format": 1, 00:28:10.796 "firmware": 0, 00:28:10.796 "ns_manage": 1 00:28:10.796 }, 00:28:10.796 "multi_ctrlr": false, 00:28:10.796 "ana_reporting": false 00:28:10.796 }, 00:28:10.796 "vs": { 00:28:10.796 "nvme_version": "1.4" 00:28:10.796 }, 00:28:10.796 "ns_data": { 00:28:10.796 "id": 1, 00:28:10.796 "can_share": false 00:28:10.796 } 00:28:10.796 } 00:28:10.796 ], 00:28:10.796 "mp_policy": "active_passive" 00:28:10.796 } 00:28:10.796 } 00:28:10.796 ]' 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:10.796 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:11.058 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=c16cdd4f-316c-466c-a546-3b0a7df6faf0 00:28:11.058 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:11.058 10:56:36 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c16cdd4f-316c-466c-a546-3b0a7df6faf0 00:28:11.318 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:11.578 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=6ff8e192-5d96-4f06-aad3-875f80900ff5 00:28:11.579 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6ff8e192-5d96-4f06-aad3-875f80900ff5 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:11.838 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:11.838 { 00:28:11.838 "name": "d7348c7f-d8a7-4046-97b7-20d93e7acbbb", 00:28:11.838 "aliases": [ 00:28:11.838 "lvs/nvme0n1p0" 00:28:11.838 ], 00:28:11.838 "product_name": "Logical Volume", 00:28:11.838 "block_size": 4096, 00:28:11.838 "num_blocks": 26476544, 00:28:11.838 "uuid": "d7348c7f-d8a7-4046-97b7-20d93e7acbbb", 00:28:11.838 "assigned_rate_limits": { 00:28:11.838 "rw_ios_per_sec": 0, 00:28:11.838 "rw_mbytes_per_sec": 0, 00:28:11.838 "r_mbytes_per_sec": 0, 00:28:11.838 "w_mbytes_per_sec": 0 00:28:11.838 }, 00:28:11.838 "claimed": false, 00:28:11.838 "zoned": false, 00:28:11.838 "supported_io_types": { 00:28:11.838 "read": true, 00:28:11.838 "write": true, 00:28:11.838 "unmap": true, 00:28:11.838 "flush": false, 00:28:11.838 "reset": true, 00:28:11.838 "nvme_admin": false, 00:28:11.838 "nvme_io": false, 00:28:11.838 "nvme_io_md": false, 00:28:11.838 "write_zeroes": true, 00:28:11.838 "zcopy": false, 00:28:11.838 "get_zone_info": false, 00:28:11.838 "zone_management": false, 00:28:11.838 
"zone_append": false, 00:28:11.838 "compare": false, 00:28:11.838 "compare_and_write": false, 00:28:11.838 "abort": false, 00:28:11.838 "seek_hole": true, 00:28:11.838 "seek_data": true, 00:28:11.838 "copy": false, 00:28:11.839 "nvme_iov_md": false 00:28:11.839 }, 00:28:11.839 "driver_specific": { 00:28:11.839 "lvol": { 00:28:11.839 "lvol_store_uuid": "6ff8e192-5d96-4f06-aad3-875f80900ff5", 00:28:11.839 "base_bdev": "nvme0n1", 00:28:11.839 "thin_provision": true, 00:28:11.839 "num_allocated_clusters": 0, 00:28:11.839 "snapshot": false, 00:28:11.839 "clone": false, 00:28:11.839 "esnap_clone": false 00:28:11.839 } 00:28:11.839 } 00:28:11.839 } 00:28:11.839 ]' 00:28:11.839 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:12.099 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:12.100 10:56:37 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:12.383 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:12.383 { 00:28:12.384 "name": "d7348c7f-d8a7-4046-97b7-20d93e7acbbb", 00:28:12.384 "aliases": [ 00:28:12.384 "lvs/nvme0n1p0" 00:28:12.384 ], 00:28:12.384 "product_name": "Logical Volume", 00:28:12.384 "block_size": 4096, 00:28:12.384 "num_blocks": 26476544, 00:28:12.384 "uuid": "d7348c7f-d8a7-4046-97b7-20d93e7acbbb", 00:28:12.384 "assigned_rate_limits": { 00:28:12.384 "rw_ios_per_sec": 0, 00:28:12.384 "rw_mbytes_per_sec": 0, 00:28:12.384 "r_mbytes_per_sec": 0, 00:28:12.384 "w_mbytes_per_sec": 0 00:28:12.384 }, 00:28:12.384 "claimed": false, 00:28:12.384 "zoned": false, 00:28:12.384 "supported_io_types": { 00:28:12.384 "read": true, 00:28:12.384 "write": true, 00:28:12.384 "unmap": true, 00:28:12.384 "flush": false, 00:28:12.384 "reset": true, 00:28:12.384 "nvme_admin": false, 00:28:12.384 "nvme_io": false, 00:28:12.384 "nvme_io_md": false, 00:28:12.384 "write_zeroes": true, 00:28:12.384 "zcopy": false, 00:28:12.384 "get_zone_info": false, 00:28:12.384 
"zone_management": false, 00:28:12.384 "zone_append": false, 00:28:12.384 "compare": false, 00:28:12.384 "compare_and_write": false, 00:28:12.384 "abort": false, 00:28:12.384 "seek_hole": true, 00:28:12.384 "seek_data": true, 00:28:12.384 "copy": false, 00:28:12.384 "nvme_iov_md": false 00:28:12.384 }, 00:28:12.384 "driver_specific": { 00:28:12.384 "lvol": { 00:28:12.384 "lvol_store_uuid": "6ff8e192-5d96-4f06-aad3-875f80900ff5", 00:28:12.384 "base_bdev": "nvme0n1", 00:28:12.384 "thin_provision": true, 00:28:12.384 "num_allocated_clusters": 0, 00:28:12.384 "snapshot": false, 00:28:12.384 "clone": false, 00:28:12.384 "esnap_clone": false 00:28:12.384 } 00:28:12.384 } 00:28:12.384 } 00:28:12.384 ]' 00:28:12.384 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:12.665 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7348c7f-d8a7-4046-97b7-20d93e7acbbb 00:28:12.930 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:12.930 { 00:28:12.930 "name": "d7348c7f-d8a7-4046-97b7-20d93e7acbbb", 00:28:12.930 "aliases": [ 00:28:12.930 "lvs/nvme0n1p0" 00:28:12.930 ], 00:28:12.930 "product_name": "Logical Volume", 00:28:12.930 "block_size": 4096, 00:28:12.930 "num_blocks": 26476544, 00:28:12.930 "uuid": "d7348c7f-d8a7-4046-97b7-20d93e7acbbb", 00:28:12.930 "assigned_rate_limits": { 00:28:12.930 "rw_ios_per_sec": 0, 00:28:12.930 "rw_mbytes_per_sec": 0, 00:28:12.930 "r_mbytes_per_sec": 0, 00:28:12.930 "w_mbytes_per_sec": 0 00:28:12.930 }, 00:28:12.930 "claimed": false, 00:28:12.930 "zoned": false, 00:28:12.930 "supported_io_types": { 00:28:12.930 "read": true, 00:28:12.930 "write": true, 00:28:12.930 "unmap": true, 00:28:12.930 "flush": false, 00:28:12.930 "reset": true, 00:28:12.930 "nvme_admin": false, 00:28:12.930 "nvme_io": false, 00:28:12.930 "nvme_io_md": false, 00:28:12.930 "write_zeroes": true, 00:28:12.930 "zcopy": false, 00:28:12.930 "get_zone_info": false, 00:28:12.930 "zone_management": false, 00:28:12.930 "zone_append": false, 00:28:12.930 "compare": false, 00:28:12.930 "compare_and_write": false, 00:28:12.930 "abort": false, 
00:28:12.930 "seek_hole": true, 00:28:12.930 "seek_data": true, 00:28:12.930 "copy": false, 00:28:12.930 "nvme_iov_md": false 00:28:12.930 }, 00:28:12.930 "driver_specific": { 00:28:12.930 "lvol": { 00:28:12.930 "lvol_store_uuid": "6ff8e192-5d96-4f06-aad3-875f80900ff5", 00:28:12.930 "base_bdev": "nvme0n1", 00:28:12.930 "thin_provision": true, 00:28:12.930 "num_allocated_clusters": 0, 00:28:12.930 "snapshot": false, 00:28:12.930 "clone": false, 00:28:12.930 "esnap_clone": false 00:28:12.931 } 00:28:12.931 } 00:28:12.931 } 00:28:12.931 ]' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d7348c7f-d8a7-4046-97b7-20d93e7acbbb --l2p_dram_limit 10' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:12.931 10:56:38 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d7348c7f-d8a7-4046-97b7-20d93e7acbbb --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:13.192 [2024-11-18 10:56:38.947366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.947404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:13.192 [2024-11-18 10:56:38.947417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:13.192 [2024-11-18 10:56:38.947424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.947467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.947474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.192 [2024-11-18 10:56:38.947482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:13.192 [2024-11-18 10:56:38.947488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.947511] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:13.192 [2024-11-18 10:56:38.948070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:13.192 [2024-11-18 10:56:38.948086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.948093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.192 [2024-11-18 10:56:38.948100] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:28:13.192 [2024-11-18 10:56:38.948106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.948134] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 75642e4f-01b4-4066-b9aa-775e5d86fcd1 00:28:13.192 [2024-11-18 10:56:38.949200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.949241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:13.192 [2024-11-18 10:56:38.949249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:13.192 [2024-11-18 10:56:38.949257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.953975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.954004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:13.192 [2024-11-18 10:56:38.954013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.663 ms 00:28:13.192 [2024-11-18 10:56:38.954020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.954088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.954096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:13.192 [2024-11-18 10:56:38.954103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:13.192 [2024-11-18 10:56:38.954112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.954143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.954152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:13.192 [2024-11-18 10:56:38.954158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:13.192 [2024-11-18 10:56:38.954167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.954183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:13.192 [2024-11-18 10:56:38.957033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.957058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:13.192 [2024-11-18 10:56:38.957068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:28:13.192 [2024-11-18 10:56:38.957074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.957100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.192 [2024-11-18 10:56:38.957106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:13.192 [2024-11-18 10:56:38.957114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:13.192 [2024-11-18 10:56:38.957119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.192 [2024-11-18 10:56:38.957141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:13.192 [2024-11-18 10:56:38.957253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:13.193 [2024-11-18 10:56:38.957265] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:13.193 [2024-11-18 10:56:38.957274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:13.193 [2024-11-18 10:56:38.957284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:13.193 [2024-11-18 10:56:38.957304] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:13.193 [2024-11-18 10:56:38.957314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:13.193 [2024-11-18 10:56:38.957319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:13.193 [2024-11-18 10:56:38.957326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.193 [2024-11-18 10:56:38.957332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:13.193 [2024-11-18 10:56:38.957340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:28:13.193 [2024-11-18 10:56:38.957352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.193 [2024-11-18 10:56:38.957418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.193 [2024-11-18 10:56:38.957425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:13.193 [2024-11-18 10:56:38.957432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:13.193 [2024-11-18 10:56:38.957438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.193 [2024-11-18 10:56:38.957524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:13.193 [2024-11-18 10:56:38.957531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:13.193 [2024-11-18 10:56:38.957539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:13.193 [2024-11-18 10:56:38.957560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:13.193 [2024-11-18 10:56:38.957581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.193 [2024-11-18 10:56:38.957592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:13.193 [2024-11-18 10:56:38.957597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:13.193 [2024-11-18 10:56:38.957604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.193 [2024-11-18 10:56:38.957609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:13.193 [2024-11-18 10:56:38.957616] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:13.193 [2024-11-18 10:56:38.957621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:13.193 [2024-11-18 10:56:38.957634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:13.193 [2024-11-18 10:56:38.957652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:13.193 [2024-11-18 10:56:38.957669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:13.193 [2024-11-18 10:56:38.957686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:13.193 [2024-11-18 10:56:38.957702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:13.193 [2024-11-18 10:56:38.957721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.193 [2024-11-18 10:56:38.957732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:13.193 [2024-11-18 10:56:38.957737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:13.193 [2024-11-18 10:56:38.957743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.193 [2024-11-18 10:56:38.957748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:13.193 [2024-11-18 10:56:38.957755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:13.193 [2024-11-18 10:56:38.957760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:13.193 [2024-11-18 10:56:38.957771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:13.193 [2024-11-18 10:56:38.957777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957782] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:13.193 [2024-11-18 10:56:38.957788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:13.193 [2024-11-18 10:56:38.957794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:13.193 [2024-11-18 10:56:38.957803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.193 [2024-11-18 10:56:38.957809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:13.193 [2024-11-18 10:56:38.957817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:13.193 [2024-11-18 10:56:38.957822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:13.193 [2024-11-18 10:56:38.957828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:13.193 [2024-11-18 10:56:38.957833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:13.193 [2024-11-18 10:56:38.957839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:13.193 [2024-11-18 10:56:38.957847] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:13.193 [2024-11-18 10:56:38.957855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:13.193 [2024-11-18 10:56:38.957871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:13.193 [2024-11-18 10:56:38.957876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:13.193 [2024-11-18 10:56:38.957882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:13.193 [2024-11-18 10:56:38.957888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:13.193 [2024-11-18 10:56:38.957894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:13.193 [2024-11-18 10:56:38.957899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:13.193 [2024-11-18 10:56:38.957906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:13.193 [2024-11-18 10:56:38.957911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:13.193 [2024-11-18 10:56:38.957919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
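In the superblock metadata layout dump, blk_offs and blk_sz count 4 KiB device blocks (the 4096-byte block_size that bdev_get_bdevs reported for both controllers above), so each hex entry can be cross-checked against the MiB figures in the region dump: 0x5000 blocks is exactly the 80.00 MiB of the l2p region, 0x800 the 8.00 MiB of each p2l band, and 0x20 the 0.12 MiB superblock. A quick conversion helper for checking the dump by hand (blk_to_mib is hypothetical, not part of the SPDK scripts):

    #!/usr/bin/env bash
    # Convert a blk_sz/blk_offs value from the SB layout dump into MiB,
    # assuming the 4096-byte block size reported by bdev_get_bdevs.
    blk_to_mib() {
        local blocks=$(( $1 ))           # bash arithmetic accepts hex like 0x5000
        local bytes=$(( blocks * 4096 ))
        # Truncate to two decimals, matching the dump's "0.12 MiB" style.
        printf '%s blocks = %d.%02d MiB\n' "$1" \
            $(( bytes / 1048576 )) $(( (bytes % 1048576) * 100 / 1048576 ))
    }

    blk_to_mib 0x5000   # l2p region -> 80.00 MiB (matches "blocks: 80.00 MiB")
    blk_to_mib 0x800    # p2l band   ->  8.00 MiB (matches "blocks: 8.00 MiB")
    blk_to_mib 0x20     # sb region  ->  0.12 MiB (matches "blocks: 0.12 MiB")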
00:28:13.193 [2024-11-18 10:56:38.957950] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:13.193 [2024-11-18 10:56:38.957958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.193 [2024-11-18 10:56:38.957971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:13.194 [2024-11-18 10:56:38.957977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:13.194 [2024-11-18 10:56:38.957984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:13.194 [2024-11-18 10:56:38.957990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.194 [2024-11-18 10:56:38.957998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:13.194 [2024-11-18 10:56:38.958004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:28:13.194 [2024-11-18 10:56:38.958012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.194 [2024-11-18 10:56:38.958040] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:13.194 [2024-11-18 10:56:38.958056] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:16.496 [2024-11-18 10:56:42.373753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.496 [2024-11-18 10:56:42.373837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:16.496 [2024-11-18 10:56:42.373854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3415.697 ms 00:28:16.496 [2024-11-18 10:56:42.373866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.406111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.406185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.757 [2024-11-18 10:56:42.406201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.004 ms 00:28:16.757 [2024-11-18 10:56:42.406236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.406397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.406419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:16.757 [2024-11-18 10:56:42.406435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:16.757 [2024-11-18 10:56:42.406453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.442320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.442380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.757 [2024-11-18 10:56:42.442393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.782 ms 00:28:16.757 [2024-11-18 10:56:42.442404] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.442441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.442457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.757 [2024-11-18 10:56:42.442466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:16.757 [2024-11-18 10:56:42.442476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.443041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.443072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.757 [2024-11-18 10:56:42.443083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:28:16.757 [2024-11-18 10:56:42.443093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.443242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.443257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.757 [2024-11-18 10:56:42.443270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:28:16.757 [2024-11-18 10:56:42.443283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.461045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.461280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.757 [2024-11-18 10:56:42.461302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.742 ms 00:28:16.757 [2024-11-18 10:56:42.461313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.474674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:16.757 [2024-11-18 10:56:42.478582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.478629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:16.757 [2024-11-18 10:56:42.478643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.170 ms 00:28:16.757 [2024-11-18 10:56:42.478652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.579914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.580141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:16.757 [2024-11-18 10:56:42.580175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.223 ms 00:28:16.757 [2024-11-18 10:56:42.580185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.580439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.580458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:16.757 [2024-11-18 10:56:42.580475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:28:16.757 [2024-11-18 10:56:42.580484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.607847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.607906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:16.757 [2024-11-18 10:56:42.607925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.298 ms 00:28:16.757 [2024-11-18 10:56:42.607933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.633994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.634049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:16.757 [2024-11-18 10:56:42.634067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.994 ms 00:28:16.757 [2024-11-18 10:56:42.634075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.757 [2024-11-18 10:56:42.634803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.757 [2024-11-18 10:56:42.634847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:16.757 [2024-11-18 10:56:42.634861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:28:16.757 [2024-11-18 10:56:42.634869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.720191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.018 [2024-11-18 10:56:42.720255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:17.018 [2024-11-18 10:56:42.720276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.245 ms 00:28:17.018 [2024-11-18 10:56:42.720285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.748400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.018 [2024-11-18 10:56:42.748479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:17.018 [2024-11-18 10:56:42.748496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.004 ms 00:28:17.018 [2024-11-18 10:56:42.748504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.775562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.018 [2024-11-18 10:56:42.775614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:17.018 [2024-11-18 10:56:42.775630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.998 ms 00:28:17.018 [2024-11-18 10:56:42.775638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.802629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.018 [2024-11-18 10:56:42.802685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:17.018 [2024-11-18 10:56:42.802704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.930 ms 00:28:17.018 [2024-11-18 10:56:42.802712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.802774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.018 [2024-11-18 10:56:42.802784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:17.018 [2024-11-18 10:56:42.802799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:17.018 [2024-11-18 10:56:42.802808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.802922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.018 [2024-11-18 
10:56:42.802935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:17.018 [2024-11-18 10:56:42.802952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:17.018 [2024-11-18 10:56:42.802961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.018 [2024-11-18 10:56:42.804376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3856.458 ms, result 0 00:28:17.018 { 00:28:17.018 "name": "ftl0", 00:28:17.018 "uuid": "75642e4f-01b4-4066-b9aa-775e5d86fcd1" 00:28:17.018 } 00:28:17.018 10:56:42 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:17.018 10:56:42 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:17.278 10:56:43 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:17.278 10:56:43 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:17.541 [2024-11-18 10:56:43.231537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.231588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:17.541 [2024-11-18 10:56:43.231603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:17.541 [2024-11-18 10:56:43.231618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.231642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:17.541 [2024-11-18 10:56:43.234336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.234374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:17.541 [2024-11-18 10:56:43.234386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:28:17.541 [2024-11-18 10:56:43.234394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.234667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.234678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:17.541 [2024-11-18 10:56:43.234691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:28:17.541 [2024-11-18 10:56:43.234698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.237940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.238048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:17.541 [2024-11-18 10:56:43.238065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:28:17.541 [2024-11-18 10:56:43.238072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.244254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.244350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:17.541 [2024-11-18 10:56:43.244370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:28:17.541 [2024-11-18 10:56:43.244378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.267929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.268041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:17.541 [2024-11-18 10:56:43.268060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.468 ms 00:28:17.541 [2024-11-18 10:56:43.268068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.283086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.283200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:17.541 [2024-11-18 10:56:43.283235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.980 ms 00:28:17.541 [2024-11-18 10:56:43.283242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.283386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.283397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:17.541 [2024-11-18 10:56:43.283407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:17.541 [2024-11-18 10:56:43.283414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.306726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.306829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:17.541 [2024-11-18 10:56:43.306848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.292 ms 00:28:17.541 [2024-11-18 10:56:43.306855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.330157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.330281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:17.541 [2024-11-18 10:56:43.330300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.269 ms 00:28:17.541 [2024-11-18 10:56:43.330307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.352610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.352641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:17.541 [2024-11-18 10:56:43.352653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.269 ms 00:28:17.541 [2024-11-18 10:56:43.352660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.375475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.541 [2024-11-18 10:56:43.375505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:17.541 [2024-11-18 10:56:43.375517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.745 ms 00:28:17.541 [2024-11-18 10:56:43.375524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.541 [2024-11-18 10:56:43.375559] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:17.541 [2024-11-18 10:56:43.375572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:17.541 [2024-11-18 10:56:43.375583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:17.541 [2024-11-18 10:56:43.375591] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:28:17.541 [2024-11-18 10:56:43.375601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 4-100: 0 / 261120 wr_cnt: 0 state: free
00:28:17.542 [2024-11-18 10:56:43.376464] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:17.542 [2024-11-18 10:56:43.376475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75642e4f-01b4-4066-b9aa-775e5d86fcd1
00:28:17.542 [2024-11-18 10:56:43.376483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:17.542 [2024-11-18 10:56:43.376493] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:17.542 [2024-11-18 10:56:43.376501] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:17.542 [2024-11-18 10:56:43.376512] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:17.542 [2024-11-18 10:56:43.376519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:17.542 [2024-11-18 10:56:43.376528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:17.542 [2024-11-18 10:56:43.376535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:17.542 [2024-11-18 10:56:43.376543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:17.542 [2024-11-18 10:56:43.376556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:17.542 [2024-11-18 10:56:43.376566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.542 [2024-11-18 10:56:43.376573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:17.542 [2024-11-18 10:56:43.376583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:28:17.542 [2024-11-18 10:56:43.376590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.542 [2024-11-18 10:56:43.389035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.542 [2024-11-18 10:56:43.389063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:17.542 [2024-11-18 10:56:43.389074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.410 ms 00:28:17.542 [2024-11-18 10:56:43.389082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.542 [2024-11-18 10:56:43.389456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.542 [2024-11-18 10:56:43.389474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:17.542 [2024-11-18 10:56:43.389485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:28:17.542 [2024-11-18 10:56:43.389494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.431475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.431508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:17.803 [2024-11-18 10:56:43.431520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.431528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.431586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.431594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:17.803 [2024-11-18 10:56:43.431604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.431613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.431689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.431699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:17.803 [2024-11-18 10:56:43.431709] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.431716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.431737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.431744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:17.803 [2024-11-18 10:56:43.431754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.431761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.509603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.509648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:17.803 [2024-11-18 10:56:43.509661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.509669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.574501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.574547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:17.803 [2024-11-18 10:56:43.574561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.574571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.574649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.574659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.803 [2024-11-18 10:56:43.574669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.574677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.574745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.574755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.803 [2024-11-18 10:56:43.574766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.574773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.574869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.574879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:17.803 [2024-11-18 10:56:43.574889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.574896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.574937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.574947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:17.803 [2024-11-18 10:56:43.574957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.574965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.575006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.575016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:28:17.803 [2024-11-18 10:56:43.575026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.803 [2024-11-18 10:56:43.575033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.803 [2024-11-18 10:56:43.575082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.803 [2024-11-18 10:56:43.575093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:17.804 [2024-11-18 10:56:43.575102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.804 [2024-11-18 10:56:43.575111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.804 [2024-11-18 10:56:43.575269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.670 ms, result 0 00:28:17.804 true 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81380 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81380 ']' 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81380 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81380 00:28:17.804 killing process with pid 81380 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81380' 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 81380 00:28:17.804 10:56:43 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 81380 00:28:24.390 10:56:49 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:27.694 262144+0 records in 00:28:27.694 262144+0 records out 00:28:27.694 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.64526 s, 295 MB/s 00:28:27.694 10:56:52 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:29.611 10:56:55 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:29.611 [2024-11-18 10:56:55.076692] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
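The staging step above is worth pinning down: bs=4K x count=256K is 262144 writes of 4096 bytes each, i.e. exactly the 1073741824 bytes (1 GiB) that dd reports, and the md5sum records the fingerprint that the restored data can later be checked against before the file is replayed through ftl0 by the spdk_dd invocation that follows. A minimal standalone sketch of the same staging step (paths copied from the log; an illustration, not the actual restore.sh):

#!/usr/bin/env bash
# Sketch of the data-staging step traced above (restore.sh@69-70).
# Assumption: any scratch path works; the log happens to use the spdk_repo testfile path.
set -euo pipefail

testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
bs=4096                  # bs=4K
count=$((256 * 1024))    # count=256K -> 262144 blocks

# 4096 B * 262144 = 1073741824 B = 1 GiB, matching "262144+0 records out" above.
echo "expected bytes: $((bs * count))"

dd if=/dev/urandom of="$testfile" bs="$bs" count="$count"
md5sum "$testfile"       # fingerprint used later to verify the restored data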
00:28:29.611 10:56:55 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:29.611 [2024-11-18 10:56:55.076692] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:28:29.611 [2024-11-18 10:56:55.076928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81605 ]
00:28:29.611 [2024-11-18 10:56:55.239421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:29.611 [2024-11-18 10:56:55.353533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:29.872 [2024-11-18 10:56:55.642680] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:29.872 [2024-11-18 10:56:55.642769] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:30.135 [2024-11-18 10:56:55.804133] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.005 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.804304] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.039 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.804358] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:30.136 [2024-11-18 10:56:55.805122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:30.136 [2024-11-18 10:56:55.805157] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.804 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.807012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:30.136 [2024-11-18 10:56:55.821701] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 14.690 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.822625] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.032 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.831449] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 8.628 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.831822] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.064 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.832015] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.010 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.832130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:30.136 [2024-11-18 10:56:55.836191] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 4.066 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.836667] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.015 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.836796] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:30.136 [2024-11-18 10:56:55.836899] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:30.136 [2024-11-18 10:56:55.836970] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:30.136 [2024-11-18 10:56:55.837017] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:30.136 [2024-11-18 10:56:55.837152] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:30.136 [2024-11-18 10:56:55.837391] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:30.136 [2024-11-18 10:56:55.837436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:30.136 [2024-11-18 10:56:55.837475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:30.136 [2024-11-18 10:56:55.837585] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:30.136 [2024-11-18 10:56:55.837624] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:28:30.136 [2024-11-18 10:56:55.837644] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:30.136 [2024-11-18 10:56:55.837664] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:30.136 [2024-11-18 10:56:55.837683] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:30.136 [2024-11-18 10:56:55.837750] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.957 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.838032] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.070 ms, status 0
00:28:30.136 [2024-11-18 10:56:55.838173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:30.136 [2024-11-18 10:56:55.838192] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:28:30.136 [2024-11-18 10:56:55.838237] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:28:30.136 [2024-11-18 10:56:55.838260] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:28:30.136 [2024-11-18 10:56:55.838282] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:28:30.136 [2024-11-18 10:56:55.838303] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:28:30.136 [2024-11-18 10:56:55.838330] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:28:30.136 [2024-11-18 10:56:55.838351] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:28:30.136 [2024-11-18 10:56:55.838371] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:28:30.136 [2024-11-18 10:56:55.838392] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:28:30.136 [2024-11-18 10:56:55.838411] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:28:30.136 [2024-11-18 10:56:55.838433] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:28:30.136 [2024-11-18 10:56:55.838453] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:28:30.137 [2024-11-18 10:56:55.838473] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:28:30.137 [2024-11-18 10:56:55.838494] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:28:30.137 [2024-11-18 10:56:55.838521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:30.137 [2024-11-18 10:56:55.838529] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:28:30.137 [2024-11-18 10:56:55.838556] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:28:30.137 [2024-11-18 10:56:55.838577] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:28:30.137 [2024-11-18 10:56:55.838600] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:30.137 [2024-11-18 10:56:55.838610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:28:30.137 [2024-11-18 10:56:55.838627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:28:30.137 [2024-11-18 10:56:55.838635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:28:30.137 [2024-11-18 10:56:55.838642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:28:30.137 [2024-11-18 10:56:55.838649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:28:30.137 [2024-11-18 10:56:55.838656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:28:30.137 [2024-11-18 10:56:55.838663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:28:30.137 [2024-11-18 10:56:55.838670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:28:30.137 [2024-11-18 10:56:55.838677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:28:30.137 [2024-11-18 10:56:55.838685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:30.137 [2024-11-18 10:56:55.838722] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:30.137 [2024-11-18 10:56:55.838734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:30.137 [2024-11-18 10:56:55.838749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:30.137 [2024-11-18 10:56:55.838756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:30.137 [2024-11-18 10:56:55.838763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:30.137 [2024-11-18 10:56:55.838770] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.666 ms, status 0
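The layout numbers above are internally consistent and easy to sanity-check: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB shown for the l2p region. Illustrative shell arithmetic (not part of the test scripts):

# Illustrative check of the dumped layout numbers.
l2p_entries=20971520
l2p_addr_size=4    # bytes per entry, from "L2P address size: 4"
echo "$(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # -> 80 MiB, matches "Region l2p ... blocks: 80.00 MiB"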
00:28:30.137 [2024-11-18 10:56:55.871887] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 33.045 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.872060] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.066 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.921860] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 49.707 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.921984] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.922615] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.516 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.922821] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.128 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.939314] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 16.439 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.954046] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:28:30.137 [2024-11-18 10:56:55.954101] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:30.137 [2024-11-18 10:56:55.954116] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 14.455 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.980493] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 26.292 ms, status 0
00:28:30.137 [2024-11-18 10:56:55.993879] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 13.243 ms, status 0
00:28:30.137 [2024-11-18 10:56:56.007158] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 13.012 ms, status 0
00:28:30.137 [2024-11-18 10:56:56.007904] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.543 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.074891] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 66.906 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.086381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:30.400 [2024-11-18 10:56:56.089600] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 14.522 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.089758] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.017 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.089865] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.037 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.089915] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.006 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.089977] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:30.400 [2024-11-18 10:56:56.089988] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.012 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.116450] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 26.414 ms, status 0
00:28:30.400 [2024-11-18 10:56:56.116765] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.041 ms, status 0
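With one step per line as above, the per-step costs are easy to pull out of a saved copy of this output; the Action durations sum to roughly the 313.556 ms that the 'FTL startup' summary just below reports (the summary also covers bookkeeping between steps, so expect close, not exact). A throwaway one-liner, illustrative only (build.log is a hypothetical file name for a saved copy of this log):

# Sum the per-step durations from a saved log (assumes the one-record-per-line form used here).
grep -o 'duration [0-9.]* ms' build.log \
  | awk '{ total += $2 } END { printf "total: %.3f ms\n", total }'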
00:28:30.400 [2024-11-18 10:56:56.118187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.556 ms, result 0
00:28:31.343 [2024-11-18T10:56:58.171Z] Copying: 21/1024 [MB] (21 MBps)
[2024-11-18T10:56:59Z - 2024-11-18T10:57:44Z] Copying: 42/1024 .. 1020/1024 [MB], interval rates 11-52 MBps
[2024-11-18T10:57:44.465Z] Copying: 1024/1024 [MB] (average 21 MBps)
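The reported average is consistent with the wall clock: startup finished at 10:56:56 and the copy completed at 10:57:44, so 1024 MB over roughly 48 s is about 21 MBps. A quick check (illustrative arithmetic only):

# Average-throughput check for the copy phase above.
mb=1024
secs=48    # ~10:56:56 (startup done) to ~10:57:44 (copy complete)
echo "$(( mb / secs )) MBps"   # -> 21 MBps, matching "(average 21 MBps)"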
00:29:18.581 [2024-11-18 10:57:44.389126] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.004 ms, status 0
00:29:18.581 [2024-11-18 10:57:44.389250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:18.581 [2024-11-18 10:57:44.392277] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 3.010 ms, status 0
00:29:18.581 [2024-11-18 10:57:44.394367] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 1.988 ms, status 0
00:29:18.581 [2024-11-18 10:57:44.394459] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Fast persist NV cache metadata': duration 0.003 ms, status 0
00:29:18.581 [2024-11-18 10:57:44.394543] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL SHM clean state': duration 0.024 ms, status 0
00:29:18.581 [2024-11-18 10:57:44.394583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:18.581 [2024-11-18 10:57:44.394597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-59: 0 / 261120 wr_cnt: 0 state: free
00:29:18.582 [2024-11-18 10:57:44.395050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60:
0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:18.582 [2024-11-18 10:57:44.395425] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:18.582 [2024-11-18 10:57:44.395433] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75642e4f-01b4-4066-b9aa-775e5d86fcd1 00:29:18.582 [2024-11-18 10:57:44.395441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:18.582 [2024-11-18 10:57:44.395449] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:18.582 [2024-11-18 10:57:44.395457] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:18.582 [2024-11-18 10:57:44.395470] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:18.582 [2024-11-18 10:57:44.395481] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:18.582 [2024-11-18 10:57:44.395489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:18.582 [2024-11-18 10:57:44.395497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:18.582 [2024-11-18 10:57:44.395503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:18.582 [2024-11-18 10:57:44.395509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:18.582 [2024-11-18 10:57:44.395516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:18.582 [2024-11-18 10:57:44.395524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:18.582 [2024-11-18 10:57:44.395532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:29:18.582 [2024-11-18 10:57:44.395540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.582 [2024-11-18 10:57:44.408920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.582 [2024-11-18 10:57:44.408965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:18.582 [2024-11-18 10:57:44.408983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.363 ms 00:29:18.582 [2024-11-18 10:57:44.408991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.582 [2024-11-18 10:57:44.409412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.582 [2024-11-18 10:57:44.409436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:18.582 [2024-11-18 10:57:44.409446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:29:18.582 [2024-11-18 10:57:44.409454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.582 [2024-11-18 10:57:44.445830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.582 [2024-11-18 10:57:44.445889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:18.582 [2024-11-18 10:57:44.445901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.582 [2024-11-18 10:57:44.445909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.582 [2024-11-18 10:57:44.445979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.582 [2024-11-18 10:57:44.445987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:18.582 [2024-11-18 10:57:44.445996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.582 [2024-11-18 10:57:44.446004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.582 [2024-11-18 10:57:44.446075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.582 [2024-11-18 10:57:44.446086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:18.582 [2024-11-18 10:57:44.446099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.582 [2024-11-18 10:57:44.446107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.582 [2024-11-18 10:57:44.446123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.582 [2024-11-18 10:57:44.446132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:18.582 [2024-11-18 10:57:44.446140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.582 [2024-11-18 10:57:44.446151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.530045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.530281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:18.844 [2024-11-18 10:57:44.530313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.530321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 
[2024-11-18 10:57:44.599202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:18.844 [2024-11-18 10:57:44.599287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:18.844 [2024-11-18 10:57:44.599380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:18.844 [2024-11-18 10:57:44.599471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:18.844 [2024-11-18 10:57:44.599579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:18.844 [2024-11-18 10:57:44.599644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:18.844 [2024-11-18 10:57:44.599709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:18.844 [2024-11-18 10:57:44.599785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:18.844 [2024-11-18 10:57:44.599794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:18.844 [2024-11-18 10:57:44.599802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.844 [2024-11-18 10:57:44.599934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 210.769 ms, result 0 00:29:19.787 00:29:19.787 00:29:19.787 10:57:45 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:20.049 [2024-11-18 10:57:45.694164] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:20.049 [2024-11-18 10:57:45.694347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82111 ] 00:29:20.049 [2024-11-18 10:57:45.856101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.310 [2024-11-18 10:57:45.979584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.572 [2024-11-18 10:57:46.267946] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:20.572 [2024-11-18 10:57:46.268029] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:20.572 [2024-11-18 10:57:46.429005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.572 [2024-11-18 10:57:46.429066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:20.572 [2024-11-18 10:57:46.429088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:20.572 [2024-11-18 10:57:46.429096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.572 [2024-11-18 10:57:46.429151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.572 [2024-11-18 10:57:46.429162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:20.572 [2024-11-18 10:57:46.429174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:20.572 [2024-11-18 10:57:46.429181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.572 [2024-11-18 10:57:46.429202] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:20.572 [2024-11-18 10:57:46.429961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:20.572 [2024-11-18 10:57:46.429987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.572 [2024-11-18 10:57:46.429995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:20.572 [2024-11-18 10:57:46.430004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:29:20.572 [2024-11-18 10:57:46.430012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.572 [2024-11-18 10:57:46.430746] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:20.573 [2024-11-18 10:57:46.430818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.430829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:20.573 [2024-11-18 10:57:46.430846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:29:20.573 [2024-11-18 10:57:46.430854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.430911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.430921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:20.573 [2024-11-18 10:57:46.430929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:20.573 
[2024-11-18 10:57:46.430937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.431308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.431321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:20.573 [2024-11-18 10:57:46.431329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:29:20.573 [2024-11-18 10:57:46.431337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.431408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.431417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:20.573 [2024-11-18 10:57:46.431427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:20.573 [2024-11-18 10:57:46.431435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.431459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.431469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:20.573 [2024-11-18 10:57:46.431480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:20.573 [2024-11-18 10:57:46.431487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.431506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:20.573 [2024-11-18 10:57:46.435842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.435883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:20.573 [2024-11-18 10:57:46.435894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:29:20.573 [2024-11-18 10:57:46.435901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.435941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.435950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:20.573 [2024-11-18 10:57:46.435958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:20.573 [2024-11-18 10:57:46.435966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.436025] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:20.573 [2024-11-18 10:57:46.436050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:20.573 [2024-11-18 10:57:46.436089] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:20.573 [2024-11-18 10:57:46.436105] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:20.573 [2024-11-18 10:57:46.436229] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:20.573 [2024-11-18 10:57:46.436241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:20.573 [2024-11-18 10:57:46.436252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:29:20.573 [2024-11-18 10:57:46.436263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436284] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:20.573 [2024-11-18 10:57:46.436292] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:20.573 [2024-11-18 10:57:46.436300] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:20.573 [2024-11-18 10:57:46.436307] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:20.573 [2024-11-18 10:57:46.436315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.436322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:20.573 [2024-11-18 10:57:46.436330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:29:20.573 [2024-11-18 10:57:46.436337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.436446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.573 [2024-11-18 10:57:46.436455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:20.573 [2024-11-18 10:57:46.436463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:20.573 [2024-11-18 10:57:46.436473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.573 [2024-11-18 10:57:46.436577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:20.573 [2024-11-18 10:57:46.436588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:20.573 [2024-11-18 10:57:46.436597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:20.573 [2024-11-18 10:57:46.436620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:20.573 [2024-11-18 10:57:46.436644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.573 [2024-11-18 10:57:46.436658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:20.573 [2024-11-18 10:57:46.436668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:20.573 [2024-11-18 10:57:46.436675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.573 [2024-11-18 10:57:46.436682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:20.573 [2024-11-18 10:57:46.436690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:20.573 [2024-11-18 10:57:46.436696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:29:20.573 [2024-11-18 10:57:46.436717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:20.573 [2024-11-18 10:57:46.436738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:20.573 [2024-11-18 10:57:46.436758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:20.573 [2024-11-18 10:57:46.436778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:20.573 [2024-11-18 10:57:46.436798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.573 [2024-11-18 10:57:46.436811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:20.573 [2024-11-18 10:57:46.436819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.573 [2024-11-18 10:57:46.436832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:20.573 [2024-11-18 10:57:46.436839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:20.573 [2024-11-18 10:57:46.436845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.573 [2024-11-18 10:57:46.436853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:20.573 [2024-11-18 10:57:46.436859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:20.573 [2024-11-18 10:57:46.436865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.573 [2024-11-18 10:57:46.436871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:20.573 [2024-11-18 10:57:46.436879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:20.574 [2024-11-18 10:57:46.436886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.574 [2024-11-18 10:57:46.436895] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:20.574 [2024-11-18 10:57:46.436903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:20.574 [2024-11-18 10:57:46.436911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.574 [2024-11-18 10:57:46.436919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.574 [2024-11-18 10:57:46.436930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:20.574 [2024-11-18 10:57:46.436937] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:20.574 [2024-11-18 10:57:46.436944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:20.574 [2024-11-18 10:57:46.436950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:20.574 [2024-11-18 10:57:46.436957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:20.574 [2024-11-18 10:57:46.436964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:20.574 [2024-11-18 10:57:46.436973] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:20.574 [2024-11-18 10:57:46.436983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.436991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:20.574 [2024-11-18 10:57:46.436999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:20.574 [2024-11-18 10:57:46.437006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:20.574 [2024-11-18 10:57:46.437014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:20.574 [2024-11-18 10:57:46.437021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:20.574 [2024-11-18 10:57:46.437027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:20.574 [2024-11-18 10:57:46.437034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:20.574 [2024-11-18 10:57:46.437041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:20.574 [2024-11-18 10:57:46.437048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:20.574 [2024-11-18 10:57:46.437055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.437062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.437069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.437076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.437083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:20.574 [2024-11-18 10:57:46.437090] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:20.574 [2024-11-18 10:57:46.437099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.437108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:20.574 [2024-11-18 10:57:46.437116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:20.574 [2024-11-18 10:57:46.437124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:20.574 [2024-11-18 10:57:46.437131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:20.574 [2024-11-18 10:57:46.437140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.574 [2024-11-18 10:57:46.437148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:20.574 [2024-11-18 10:57:46.437155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:29:20.574 [2024-11-18 10:57:46.437162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.836 [2024-11-18 10:57:46.465012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.836 [2024-11-18 10:57:46.465196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:20.836 [2024-11-18 10:57:46.465389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.809 ms 00:29:20.836 [2024-11-18 10:57:46.465434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.836 [2024-11-18 10:57:46.465537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.836 [2024-11-18 10:57:46.465876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:20.836 [2024-11-18 10:57:46.465944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:20.837 [2024-11-18 10:57:46.465967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.514646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.514850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:20.837 [2024-11-18 10:57:46.514920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.604 ms 00:29:20.837 [2024-11-18 10:57:46.514944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.515010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.515034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:20.837 [2024-11-18 10:57:46.515055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:20.837 [2024-11-18 10:57:46.515074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.515201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.515249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:20.837 [2024-11-18 10:57:46.515271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:20.837 [2024-11-18 10:57:46.515292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.515439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.515542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:20.837 [2024-11-18 10:57:46.515568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:29:20.837 [2024-11-18 10:57:46.515587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.531262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.531420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:20.837 [2024-11-18 10:57:46.531479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.641 ms 00:29:20.837 [2024-11-18 10:57:46.531502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.531668] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:20.837 [2024-11-18 10:57:46.531709] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:20.837 [2024-11-18 10:57:46.531741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.531765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:20.837 [2024-11-18 10:57:46.531855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:20.837 [2024-11-18 10:57:46.531878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.544171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.544339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:20.837 [2024-11-18 10:57:46.544464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.256 ms 00:29:20.837 [2024-11-18 10:57:46.544488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.544632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.544742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:20.837 [2024-11-18 10:57:46.544768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:20.837 [2024-11-18 10:57:46.544794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.544864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.545040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:20.837 [2024-11-18 10:57:46.545083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:20.837 [2024-11-18 10:57:46.545102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.545745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.545791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:20.837 [2024-11-18 10:57:46.545813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:29:20.837 [2024-11-18 10:57:46.545832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.545920] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
00:29:20.837 [2024-11-18 10:57:46.545956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.545975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:20.837 [2024-11-18 10:57:46.545996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:20.837 [2024-11-18 10:57:46.546016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.558624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:20.837 [2024-11-18 10:57:46.558790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.558803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:20.837 [2024-11-18 10:57:46.558815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.700 ms 00:29:20.837 [2024-11-18 10:57:46.558822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.560972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.561009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:20.837 [2024-11-18 10:57:46.561018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:29:20.837 [2024-11-18 10:57:46.561026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.561124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.561135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:20.837 [2024-11-18 10:57:46.561144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:20.837 [2024-11-18 10:57:46.561151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.561175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.561188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:20.837 [2024-11-18 10:57:46.561196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:20.837 [2024-11-18 10:57:46.561227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.561258] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:20.837 [2024-11-18 10:57:46.561268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.561277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:20.837 [2024-11-18 10:57:46.561286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:20.837 [2024-11-18 10:57:46.561293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.588127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 10:57:46.588174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:20.837 [2024-11-18 10:57:46.588188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.813 ms 00:29:20.837 [2024-11-18 10:57:46.588196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.588298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.837 [2024-11-18 
10:57:46.588311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:20.837 [2024-11-18 10:57:46.588320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:20.837 [2024-11-18 10:57:46.588328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.837 [2024-11-18 10:57:46.589515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.024 ms, result 0 00:29:22.224  [2024-11-18T10:57:49.051Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-18T10:57:50.004Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-18T10:57:50.948Z] Copying: 41/1024 [MB] (20 MBps) [2024-11-18T10:57:51.892Z] Copying: 54/1024 [MB] (12 MBps) [2024-11-18T10:57:52.835Z] Copying: 64/1024 [MB] (10 MBps) [2024-11-18T10:57:53.779Z] Copying: 77/1024 [MB] (12 MBps) [2024-11-18T10:57:55.167Z] Copying: 93/1024 [MB] (15 MBps) [2024-11-18T10:57:56.111Z] Copying: 108/1024 [MB] (15 MBps) [2024-11-18T10:57:57.053Z] Copying: 128/1024 [MB] (19 MBps) [2024-11-18T10:57:57.996Z] Copying: 145/1024 [MB] (17 MBps) [2024-11-18T10:57:58.938Z] Copying: 160/1024 [MB] (14 MBps) [2024-11-18T10:57:59.877Z] Copying: 179/1024 [MB] (18 MBps) [2024-11-18T10:58:00.821Z] Copying: 197/1024 [MB] (17 MBps) [2024-11-18T10:58:02.205Z] Copying: 211/1024 [MB] (14 MBps) [2024-11-18T10:58:02.777Z] Copying: 240/1024 [MB] (29 MBps) [2024-11-18T10:58:04.164Z] Copying: 259/1024 [MB] (18 MBps) [2024-11-18T10:58:05.118Z] Copying: 279/1024 [MB] (20 MBps) [2024-11-18T10:58:06.064Z] Copying: 298/1024 [MB] (19 MBps) [2024-11-18T10:58:07.008Z] Copying: 320/1024 [MB] (22 MBps) [2024-11-18T10:58:07.947Z] Copying: 338/1024 [MB] (18 MBps) [2024-11-18T10:58:08.890Z] Copying: 353/1024 [MB] (14 MBps) [2024-11-18T10:58:09.831Z] Copying: 367/1024 [MB] (14 MBps) [2024-11-18T10:58:11.217Z] Copying: 387/1024 [MB] (19 MBps) [2024-11-18T10:58:11.790Z] Copying: 405/1024 [MB] (17 MBps) [2024-11-18T10:58:13.176Z] Copying: 426/1024 [MB] (21 MBps) [2024-11-18T10:58:14.160Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-18T10:58:15.142Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-18T10:58:16.085Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-18T10:58:17.030Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-18T10:58:17.976Z] Copying: 479/1024 [MB] (10 MBps) [2024-11-18T10:58:18.921Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-18T10:58:19.862Z] Copying: 500/1024 [MB] (10 MBps) [2024-11-18T10:58:20.807Z] Copying: 511/1024 [MB] (10 MBps) [2024-11-18T10:58:22.195Z] Copying: 523/1024 [MB] (12 MBps) [2024-11-18T10:58:23.139Z] Copying: 534/1024 [MB] (10 MBps) [2024-11-18T10:58:24.080Z] Copying: 545/1024 [MB] (10 MBps) [2024-11-18T10:58:25.022Z] Copying: 555/1024 [MB] (10 MBps) [2024-11-18T10:58:25.965Z] Copying: 565/1024 [MB] (10 MBps) [2024-11-18T10:58:26.908Z] Copying: 576/1024 [MB] (10 MBps) [2024-11-18T10:58:27.852Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-18T10:58:28.793Z] Copying: 597/1024 [MB] (10 MBps) [2024-11-18T10:58:30.179Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-18T10:58:31.125Z] Copying: 618/1024 [MB] (10 MBps) [2024-11-18T10:58:32.067Z] Copying: 629/1024 [MB] (10 MBps) [2024-11-18T10:58:33.009Z] Copying: 640/1024 [MB] (11 MBps) [2024-11-18T10:58:33.955Z] Copying: 653/1024 [MB] (12 MBps) [2024-11-18T10:58:34.898Z] Copying: 676/1024 [MB] (22 MBps) [2024-11-18T10:58:35.843Z] Copying: 701/1024 [MB] (25 MBps) [2024-11-18T10:58:36.787Z] Copying: 717/1024 [MB] (16 MBps) [2024-11-18T10:58:38.172Z] Copying: 736/1024 [MB] (18 MBps) [2024-11-18T10:58:39.112Z] Copying: 756/1024 
[MB] (20 MBps) [2024-11-18T10:58:40.051Z] Copying: 769/1024 [MB] (12 MBps) [2024-11-18T10:58:40.989Z] Copying: 788/1024 [MB] (18 MBps) [2024-11-18T10:58:41.928Z] Copying: 816/1024 [MB] (28 MBps) [2024-11-18T10:58:42.868Z] Copying: 838/1024 [MB] (22 MBps) [2024-11-18T10:58:43.807Z] Copying: 849/1024 [MB] (10 MBps) [2024-11-18T10:58:45.190Z] Copying: 860/1024 [MB] (10 MBps) [2024-11-18T10:58:46.135Z] Copying: 888/1024 [MB] (27 MBps) [2024-11-18T10:58:47.129Z] Copying: 899/1024 [MB] (11 MBps) [2024-11-18T10:58:48.071Z] Copying: 912/1024 [MB] (12 MBps) [2024-11-18T10:58:49.013Z] Copying: 927/1024 [MB] (14 MBps) [2024-11-18T10:58:49.954Z] Copying: 940/1024 [MB] (13 MBps) [2024-11-18T10:58:50.898Z] Copying: 960/1024 [MB] (20 MBps) [2024-11-18T10:58:51.842Z] Copying: 979/1024 [MB] (18 MBps) [2024-11-18T10:58:52.415Z] Copying: 1013/1024 [MB] (33 MBps) [2024-11-18T10:58:52.678Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 10:58:52.587349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.794 [2024-11-18 10:58:52.587796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:26.794 [2024-11-18 10:58:52.587837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:26.794 [2024-11-18 10:58:52.587853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.794 [2024-11-18 10:58:52.587905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:26.794 [2024-11-18 10:58:52.591214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.794 [2024-11-18 10:58:52.591260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:26.794 [2024-11-18 10:58:52.591271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:30:26.794 [2024-11-18 10:58:52.591280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.794 [2024-11-18 10:58:52.592259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.794 [2024-11-18 10:58:52.592291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:26.794 [2024-11-18 10:58:52.592304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:30:26.794 [2024-11-18 10:58:52.592313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.794 [2024-11-18 10:58:52.592354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.794 [2024-11-18 10:58:52.592366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:26.794 [2024-11-18 10:58:52.592376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:26.794 [2024-11-18 10:58:52.592386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.794 [2024-11-18 10:58:52.592466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.794 [2024-11-18 10:58:52.592478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:26.794 [2024-11-18 10:58:52.592488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:26.794 [2024-11-18 10:58:52.592498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.794 [2024-11-18 10:58:52.592513] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:26.794 [2024-11-18 10:58:52.592528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 
/ 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.592864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593096] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:26.794 [2024-11-18 10:58:52.593169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 
10:58:52.593312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:26.795 [2024-11-18 10:58:52.593504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
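A note on reading the dump that ends here and the statistics block that follows: each ftl_dev_dump_bands entry reports band number, valid LBAs / band capacity in blocks (261120 per band on this device), the cumulative write count, and the band state; after a clean format every band shows "0 / 261120 wr_cnt: 0 state: free". The statistics dump prints WAF as "inf" at this point because user writes is still 0, while the matching dump after the restore copy (further down in this log) reports total writes: 128544 and user writes: 128512, consistent with WAF being total writes divided by user writes. A minimal sketch of both readings, assuming this console output has been saved to a hypothetical console.log; the constants in the last line are copied from that later dump:

  # List every band with a non-zero write count (after grep -o, the wr_cnt value is field 7).
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' console.log | awk '$7 != "0"'
  # Recompute the post-restore WAF from the ftl_dev_dump_stats totals; prints "WAF: 1.0002".
  awk 'BEGIN { printf "WAF: %.4f\n", 128544 / 128512 }'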
00:30:26.795 [2024-11-18 10:58:52.593520] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:26.795 [2024-11-18 10:58:52.593531] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75642e4f-01b4-4066-b9aa-775e5d86fcd1 00:30:26.795 [2024-11-18 10:58:52.593539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:26.795 [2024-11-18 10:58:52.593547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:26.795 [2024-11-18 10:58:52.593554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:26.795 [2024-11-18 10:58:52.593562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:26.795 [2024-11-18 10:58:52.593569] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:26.795 [2024-11-18 10:58:52.593577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:26.795 [2024-11-18 10:58:52.593586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:26.795 [2024-11-18 10:58:52.593593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:26.795 [2024-11-18 10:58:52.593599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:26.795 [2024-11-18 10:58:52.593607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.795 [2024-11-18 10:58:52.593614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:26.795 [2024-11-18 10:58:52.593623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:30:26.795 [2024-11-18 10:58:52.593631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.795 [2024-11-18 10:58:52.609091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.795 [2024-11-18 10:58:52.609144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:26.795 [2024-11-18 10:58:52.609158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.441 ms 00:30:26.795 [2024-11-18 10:58:52.609166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.795 [2024-11-18 10:58:52.609583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.795 [2024-11-18 10:58:52.609595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:26.795 [2024-11-18 10:58:52.609613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:30:26.795 [2024-11-18 10:58:52.609622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.795 [2024-11-18 10:58:52.648474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.795 [2024-11-18 10:58:52.648533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:26.795 [2024-11-18 10:58:52.648548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.795 [2024-11-18 10:58:52.648558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.795 [2024-11-18 10:58:52.648645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.795 [2024-11-18 10:58:52.648657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:26.795 [2024-11-18 10:58:52.648672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.795 [2024-11-18 10:58:52.648682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.795 [2024-11-18 
10:58:52.648744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.795 [2024-11-18 10:58:52.648756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:26.795 [2024-11-18 10:58:52.648765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.795 [2024-11-18 10:58:52.648772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.795 [2024-11-18 10:58:52.648789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.795 [2024-11-18 10:58:52.648797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:26.795 [2024-11-18 10:58:52.648805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.795 [2024-11-18 10:58:52.648817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.056 [2024-11-18 10:58:52.735571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.056 [2024-11-18 10:58:52.735635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.056 [2024-11-18 10:58:52.735650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.056 [2024-11-18 10:58:52.735659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.056 [2024-11-18 10:58:52.806841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.056 [2024-11-18 10:58:52.807069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.056 [2024-11-18 10:58:52.807092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.056 [2024-11-18 10:58:52.807110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.056 [2024-11-18 10:58:52.807197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.056 [2024-11-18 10:58:52.807236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.056 [2024-11-18 10:58:52.807247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.056 [2024-11-18 10:58:52.807255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.056 [2024-11-18 10:58:52.807301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.056 [2024-11-18 10:58:52.807311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.056 [2024-11-18 10:58:52.807320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.056 [2024-11-18 10:58:52.807328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.056 [2024-11-18 10:58:52.807432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.056 [2024-11-18 10:58:52.807444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.056 [2024-11-18 10:58:52.807453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.057 [2024-11-18 10:58:52.807462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.057 [2024-11-18 10:58:52.807491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.057 [2024-11-18 10:58:52.807501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:27.057 [2024-11-18 10:58:52.807509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.057 [2024-11-18 10:58:52.807519] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:27.057 [2024-11-18 10:58:52.807564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:27.057 [2024-11-18 10:58:52.807574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:27.057 [2024-11-18 10:58:52.807582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:27.057 [2024-11-18 10:58:52.807590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:27.057 [2024-11-18 10:58:52.807636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:27.057 [2024-11-18 10:58:52.807647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:27.057 [2024-11-18 10:58:52.807656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:27.057 [2024-11-18 10:58:52.807664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:27.057 [2024-11-18 10:58:52.807803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 220.432 ms, result 0
00:30:28.000
00:30:28.000
00:30:28.000 10:58:53 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:29.916 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:29.916 10:58:55 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:30:29.916 [2024-11-18 10:58:55.763993] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:30:29.916 [2024-11-18 10:58:55.764085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82816 ]
00:30:30.177 [2024-11-18 10:58:55.914240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:30.177 [2024-11-18 10:58:55.991936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:30.438 [2024-11-18 10:58:56.197664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:30.438 [2024-11-18 10:58:56.197715] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:30.700 [2024-11-18 10:58:56.345290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.700 [2024-11-18 10:58:56.345324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:30:30.700 [2024-11-18 10:58:56.345337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:30:30.700 [2024-11-18 10:58:56.345344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:30.700 [2024-11-18 10:58:56.345377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:30.700 [2024-11-18 10:58:56.345384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:30.700 [2024-11-18 10:58:56.345392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:30:30.700 [2024-11-18 10:58:56.345398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:30.700 [2024-11-18 10:58:56.345411] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:30.700
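The entries above are the pivot of the fast-restore test: the 'FTL fast shutdown' management process completes in 220.432 ms, restore.sh@76 verifies the previously written test data against its recorded checksum, and restore.sh@79 launches spdk_dd to stream the test file into the ftl0 bdev at an output offset, which triggers the FTL startup sequence traced below. A sketch of the two commands as logged in this run; the paths and flags are copied from the log, while the reading of --seek=131072 as 4 KiB output I/O units (512 MiB) is an assumption, not something this log confirms:

  # Verify the data written before the fast shutdown survived intact.
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # Rewrite the test file through the FTL bdev, skipping 131072 output I/O
  # units first (512 MiB if the unit is the 4 KiB FTL block; assumed).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --seek=131072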
[2024-11-18 10:58:56.345903] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:30.700 [2024-11-18 10:58:56.345915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.345921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:30.700 [2024-11-18 10:58:56.345927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:30:30.700 [2024-11-18 10:58:56.345933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.700 [2024-11-18 10:58:56.346119] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:30.700 [2024-11-18 10:58:56.346135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.346141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:30.700 [2024-11-18 10:58:56.346150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:30.700 [2024-11-18 10:58:56.346155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.700 [2024-11-18 10:58:56.346224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.346233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:30.700 [2024-11-18 10:58:56.346239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:30.700 [2024-11-18 10:58:56.346245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.700 [2024-11-18 10:58:56.346441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.346451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:30.700 [2024-11-18 10:58:56.346457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:30:30.700 [2024-11-18 10:58:56.346463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.700 [2024-11-18 10:58:56.346512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.346519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:30.700 [2024-11-18 10:58:56.346525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:30.700 [2024-11-18 10:58:56.346530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.700 [2024-11-18 10:58:56.346545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.346552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:30.700 [2024-11-18 10:58:56.346559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:30.700 [2024-11-18 10:58:56.346566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.700 [2024-11-18 10:58:56.346578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:30.700 [2024-11-18 10:58:56.349446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.349473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:30.700 [2024-11-18 10:58:56.349481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:30:30.700 [2024-11-18 10:58:56.349486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:30.700 [2024-11-18 10:58:56.349512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.700 [2024-11-18 10:58:56.349519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:30.700 [2024-11-18 10:58:56.349525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:30.701 [2024-11-18 10:58:56.349530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.701 [2024-11-18 10:58:56.349562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:30.701 [2024-11-18 10:58:56.349578] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:30.701 [2024-11-18 10:58:56.349605] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:30.701 [2024-11-18 10:58:56.349617] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:30.701 [2024-11-18 10:58:56.349695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:30.701 [2024-11-18 10:58:56.349703] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:30.701 [2024-11-18 10:58:56.349711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:30.701 [2024-11-18 10:58:56.349718] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:30.701 [2024-11-18 10:58:56.349724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:30.701 [2024-11-18 10:58:56.349730] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:30.701 [2024-11-18 10:58:56.349737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:30.701 [2024-11-18 10:58:56.349743] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:30.701 [2024-11-18 10:58:56.349749] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:30.701 [2024-11-18 10:58:56.349754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.701 [2024-11-18 10:58:56.349760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:30.701 [2024-11-18 10:58:56.349765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:30:30.701 [2024-11-18 10:58:56.349771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.701 [2024-11-18 10:58:56.349833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.701 [2024-11-18 10:58:56.349839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:30.701 [2024-11-18 10:58:56.349844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:30.701 [2024-11-18 10:58:56.349851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.701 [2024-11-18 10:58:56.349926] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:30.701 [2024-11-18 10:58:56.349933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:30.701 [2024-11-18 10:58:56.349940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:30.701 [2024-11-18 10:58:56.349946] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.349951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:30.701 [2024-11-18 10:58:56.349956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.349961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:30.701 [2024-11-18 10:58:56.349968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:30.701 [2024-11-18 10:58:56.349973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:30.701 [2024-11-18 10:58:56.349978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:30.701 [2024-11-18 10:58:56.349983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:30.701 [2024-11-18 10:58:56.349989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:30.701 [2024-11-18 10:58:56.349994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:30.701 [2024-11-18 10:58:56.349999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:30.701 [2024-11-18 10:58:56.350004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:30.701 [2024-11-18 10:58:56.350009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:30.701 [2024-11-18 10:58:56.350023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:30.701 [2024-11-18 10:58:56.350038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:30.701 [2024-11-18 10:58:56.350052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:30.701 [2024-11-18 10:58:56.350067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:30.701 [2024-11-18 10:58:56.350081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:30.701 [2024-11-18 10:58:56.350095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:30.701 [2024-11-18 10:58:56.350105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:30.701 
[2024-11-18 10:58:56.350109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:30.701 [2024-11-18 10:58:56.350114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:30.701 [2024-11-18 10:58:56.350119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:30.701 [2024-11-18 10:58:56.350124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:30.701 [2024-11-18 10:58:56.350128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:30.701 [2024-11-18 10:58:56.350137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:30.701 [2024-11-18 10:58:56.350143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:30.701 [2024-11-18 10:58:56.350154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:30.701 [2024-11-18 10:58:56.350160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:30.701 [2024-11-18 10:58:56.350171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:30.701 [2024-11-18 10:58:56.350176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:30.701 [2024-11-18 10:58:56.350180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:30.701 [2024-11-18 10:58:56.350185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:30.701 [2024-11-18 10:58:56.350190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:30.701 [2024-11-18 10:58:56.350197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:30.701 [2024-11-18 10:58:56.350218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:30.701 [2024-11-18 10:58:56.350227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:30.701 [2024-11-18 10:58:56.350239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:30.701 [2024-11-18 10:58:56.350244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:30.701 [2024-11-18 10:58:56.350249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:30.701 [2024-11-18 10:58:56.350255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:30.701 [2024-11-18 10:58:56.350260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:30.701 [2024-11-18 10:58:56.350265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:30:30.701 [2024-11-18 10:58:56.350270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:30.701 [2024-11-18 10:58:56.350276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:30.701 [2024-11-18 10:58:56.350281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:30.701 [2024-11-18 10:58:56.350307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:30.701 [2024-11-18 10:58:56.350313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:30.701 [2024-11-18 10:58:56.350325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:30.701 [2024-11-18 10:58:56.350331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:30.701 [2024-11-18 10:58:56.350336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:30.702 [2024-11-18 10:58:56.350342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.350347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:30.702 [2024-11-18 10:58:56.350353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:30:30.702 [2024-11-18 10:58:56.350358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.368837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.368864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:30.702 [2024-11-18 10:58:56.368872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.449 ms 00:30:30.702 [2024-11-18 10:58:56.368877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.368940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.368946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:30.702 [2024-11-18 10:58:56.368952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:30.702 [2024-11-18 
10:58:56.368959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.408196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.408233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:30.702 [2024-11-18 10:58:56.408243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.200 ms 00:30:30.702 [2024-11-18 10:58:56.408249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.408281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.408289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:30.702 [2024-11-18 10:58:56.408295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:30.702 [2024-11-18 10:58:56.408301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.408375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.408383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:30.702 [2024-11-18 10:58:56.408389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:30.702 [2024-11-18 10:58:56.408395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.408501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.408510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:30.702 [2024-11-18 10:58:56.408517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:30:30.702 [2024-11-18 10:58:56.408523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.419056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.419085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:30.702 [2024-11-18 10:58:56.419092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.519 ms 00:30:30.702 [2024-11-18 10:58:56.419098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.419179] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:30.702 [2024-11-18 10:58:56.419189] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:30.702 [2024-11-18 10:58:56.419196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.419224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:30.702 [2024-11-18 10:58:56.419234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:30.702 [2024-11-18 10:58:56.419240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.428380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.428407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:30.702 [2024-11-18 10:58:56.428415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.128 ms 00:30:30.702 [2024-11-18 10:58:56.428422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:30:30.702 [2024-11-18 10:58:56.428508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.428515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:30.702 [2024-11-18 10:58:56.428521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:30.702 [2024-11-18 10:58:56.428526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.428553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.428560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:30.702 [2024-11-18 10:58:56.428566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:30.702 [2024-11-18 10:58:56.428572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.429032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.429049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:30.702 [2024-11-18 10:58:56.429059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:30:30.702 [2024-11-18 10:58:56.429067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.429087] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:30.702 [2024-11-18 10:58:56.429103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.429122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:30.702 [2024-11-18 10:58:56.429132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:30.702 [2024-11-18 10:58:56.429141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.437705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:30.702 [2024-11-18 10:58:56.437814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.437826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:30.702 [2024-11-18 10:58:56.437833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.654 ms 00:30:30.702 [2024-11-18 10:58:56.437839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.439417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.439516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:30.702 [2024-11-18 10:58:56.439531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:30:30.702 [2024-11-18 10:58:56.439537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.439599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.439606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:30.702 [2024-11-18 10:58:56.439613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:30.702 [2024-11-18 10:58:56.439618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.439643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:30:30.702 [2024-11-18 10:58:56.439650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:30.702 [2024-11-18 10:58:56.439658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:30.702 [2024-11-18 10:58:56.439664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.439685] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:30.702 [2024-11-18 10:58:56.439692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.439698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:30.702 [2024-11-18 10:58:56.439703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:30.702 [2024-11-18 10:58:56.439709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.458329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.458429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:30.702 [2024-11-18 10:58:56.458441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.605 ms 00:30:30.702 [2024-11-18 10:58:56.458448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.458500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.702 [2024-11-18 10:58:56.458508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:30.702 [2024-11-18 10:58:56.458515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:30.702 [2024-11-18 10:58:56.458521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.702 [2024-11-18 10:58:56.459226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.611 ms, result 0 00:30:31.645  [2024-11-18T10:58:58.914Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-18T10:58:59.484Z] Copying: 65/1024 [MB] (31 MBps) [2024-11-18T10:59:00.869Z] Copying: 93/1024 [MB] (28 MBps) [2024-11-18T10:59:01.811Z] Copying: 107/1024 [MB] (13 MBps) [2024-11-18T10:59:02.753Z] Copying: 125/1024 [MB] (18 MBps) [2024-11-18T10:59:03.696Z] Copying: 138/1024 [MB] (12 MBps) [2024-11-18T10:59:04.640Z] Copying: 156/1024 [MB] (17 MBps) [2024-11-18T10:59:05.583Z] Copying: 171/1024 [MB] (15 MBps) [2024-11-18T10:59:06.528Z] Copying: 189/1024 [MB] (17 MBps) [2024-11-18T10:59:07.914Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-18T10:59:08.486Z] Copying: 220/1024 [MB] (20 MBps) [2024-11-18T10:59:09.876Z] Copying: 238/1024 [MB] (17 MBps) [2024-11-18T10:59:10.819Z] Copying: 261/1024 [MB] (23 MBps) [2024-11-18T10:59:11.765Z] Copying: 276/1024 [MB] (14 MBps) [2024-11-18T10:59:12.709Z] Copying: 296/1024 [MB] (19 MBps) [2024-11-18T10:59:13.654Z] Copying: 315/1024 [MB] (19 MBps) [2024-11-18T10:59:14.598Z] Copying: 331/1024 [MB] (15 MBps) [2024-11-18T10:59:15.538Z] Copying: 344/1024 [MB] (12 MBps) [2024-11-18T10:59:16.484Z] Copying: 361/1024 [MB] (17 MBps) [2024-11-18T10:59:17.946Z] Copying: 373/1024 [MB] (11 MBps) [2024-11-18T10:59:18.519Z] Copying: 396/1024 [MB] (23 MBps) [2024-11-18T10:59:19.902Z] Copying: 407/1024 [MB] (11 MBps) [2024-11-18T10:59:20.474Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-18T10:59:21.860Z] Copying: 434/1024 [MB] (15 MBps) [2024-11-18T10:59:22.803Z] Copying: 447/1024 [MB] (13 MBps) 
[2024-11-18T10:59:23.749Z] Copying: 468756/1048576 [kB] (10208 kBps) [2024-11-18T10:59:24.692Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-18T10:59:25.636Z] Copying: 483/1024 [MB] (15 MBps) [2024-11-18T10:59:26.580Z] Copying: 498/1024 [MB] (14 MBps) [2024-11-18T10:59:27.525Z] Copying: 516/1024 [MB] (18 MBps) [2024-11-18T10:59:28.912Z] Copying: 533/1024 [MB] (16 MBps) [2024-11-18T10:59:29.484Z] Copying: 555/1024 [MB] (22 MBps) [2024-11-18T10:59:30.870Z] Copying: 573/1024 [MB] (17 MBps) [2024-11-18T10:59:31.814Z] Copying: 584/1024 [MB] (10 MBps) [2024-11-18T10:59:32.756Z] Copying: 605/1024 [MB] (21 MBps) [2024-11-18T10:59:33.698Z] Copying: 624/1024 [MB] (19 MBps) [2024-11-18T10:59:34.642Z] Copying: 644/1024 [MB] (19 MBps) [2024-11-18T10:59:35.585Z] Copying: 681/1024 [MB] (36 MBps) [2024-11-18T10:59:36.528Z] Copying: 724/1024 [MB] (43 MBps) [2024-11-18T10:59:37.915Z] Copying: 739/1024 [MB] (14 MBps) [2024-11-18T10:59:38.487Z] Copying: 752/1024 [MB] (13 MBps) [2024-11-18T10:59:39.874Z] Copying: 767/1024 [MB] (14 MBps) [2024-11-18T10:59:40.818Z] Copying: 796008/1048576 [kB] (10184 kBps) [2024-11-18T10:59:41.761Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-18T10:59:42.704Z] Copying: 826/1024 [MB] (37 MBps) [2024-11-18T10:59:43.647Z] Copying: 858/1024 [MB] (32 MBps) [2024-11-18T10:59:44.588Z] Copying: 890/1024 [MB] (31 MBps) [2024-11-18T10:59:45.532Z] Copying: 907/1024 [MB] (16 MBps) [2024-11-18T10:59:46.476Z] Copying: 921/1024 [MB] (14 MBps) [2024-11-18T10:59:47.861Z] Copying: 935/1024 [MB] (13 MBps) [2024-11-18T10:59:48.804Z] Copying: 953/1024 [MB] (18 MBps) [2024-11-18T10:59:49.773Z] Copying: 965/1024 [MB] (12 MBps) [2024-11-18T10:59:50.742Z] Copying: 981/1024 [MB] (15 MBps) [2024-11-18T10:59:51.685Z] Copying: 1009/1024 [MB] (28 MBps) [2024-11-18T10:59:52.257Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-18T10:59:52.257Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-18 10:59:52.224581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.373 [2024-11-18 10:59:52.224660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:26.373 [2024-11-18 10:59:52.224679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:26.374 [2024-11-18 10:59:52.224688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.374 [2024-11-18 10:59:52.227976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:26.374 [2024-11-18 10:59:52.232964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.374 [2024-11-18 10:59:52.233090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:26.374 [2024-11-18 10:59:52.233154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:31:26.374 [2024-11-18 10:59:52.233181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.374 [2024-11-18 10:59:52.244487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.374 [2024-11-18 10:59:52.244660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:26.374 [2024-11-18 10:59:52.244741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.048 ms 00:31:26.374 [2024-11-18 10:59:52.244767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.374 [2024-11-18 10:59:52.244817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.374 [2024-11-18 10:59:52.244871] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:26.374 [2024-11-18 10:59:52.244903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:26.374 [2024-11-18 10:59:52.244923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.374 [2024-11-18 10:59:52.245152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.374 [2024-11-18 10:59:52.245201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:26.374 [2024-11-18 10:59:52.245250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:26.374 [2024-11-18 10:59:52.245270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.374 [2024-11-18 10:59:52.245298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:26.374 [2024-11-18 10:59:52.245324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:31:26.374 [2024-11-18 10:59:52.245357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.245962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:31:26.374 [2024-11-18 10:59:52.246383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.246944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.247899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.248991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.249021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.249093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.249127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.249156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.249249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:26.374 [2024-11-18 10:59:52.249283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249655] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:26.375 [2024-11-18 10:59:52.249720] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:26.375 [2024-11-18 10:59:52.249731] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75642e4f-01b4-4066-b9aa-775e5d86fcd1 00:31:26.375 [2024-11-18 10:59:52.249739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:31:26.375 [2024-11-18 10:59:52.249749] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:31:26.375 [2024-11-18 10:59:52.249757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:31:26.375 [2024-11-18 10:59:52.249767] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:26.375 [2024-11-18 10:59:52.249775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:26.375 [2024-11-18 10:59:52.249784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:26.375 [2024-11-18 10:59:52.249797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:26.375 [2024-11-18 10:59:52.249804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:26.375 [2024-11-18 10:59:52.249812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:26.375 [2024-11-18 10:59:52.249822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.375 [2024-11-18 10:59:52.249830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:26.375 [2024-11-18 10:59:52.249840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.524 ms 00:31:26.375 [2024-11-18 10:59:52.249848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.636 [2024-11-18 10:59:52.263586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.636 [2024-11-18 10:59:52.263640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:26.636 [2024-11-18 10:59:52.263653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.709 ms 00:31:26.636 [2024-11-18 10:59:52.263670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.636 [2024-11-18 10:59:52.264087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.636 [2024-11-18 10:59:52.264098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:26.636 [2024-11-18 10:59:52.264107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:31:26.636 [2024-11-18 10:59:52.264114] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:26.636 [2024-11-18 10:59:52.301227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.636 [2024-11-18 10:59:52.301278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:26.636 [2024-11-18 10:59:52.301294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.636 [2024-11-18 10:59:52.301303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.636 [2024-11-18 10:59:52.301369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.636 [2024-11-18 10:59:52.301379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:26.636 [2024-11-18 10:59:52.301388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.636 [2024-11-18 10:59:52.301397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.636 [2024-11-18 10:59:52.301458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.636 [2024-11-18 10:59:52.301471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:26.637 [2024-11-18 10:59:52.301480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.301493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.301511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.301520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:26.637 [2024-11-18 10:59:52.301529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.301538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.385729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.385966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:26.637 [2024-11-18 10:59:52.385996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.386005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:26.637 [2024-11-18 10:59:52.455328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.455338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:26.637 [2024-11-18 10:59:52.455446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.455457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:26.637 [2024-11-18 10:59:52.455516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
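A quick sanity check on the statistics dumped above: WAF (write amplification factor) is total writes divided by user writes, and the logged numbers agree. Using only values from this dump:

  awk 'BEGIN { printf "WAF = %.4f\n", 128544 / 128512 }'   # prints WAF = 1.0002, matching the log

The Rollback entries that follow appear to be the shutdown path unwinding the corresponding startup steps, hence the near-zero durations they report.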
00:31:26.637 [2024-11-18 10:59:52.455524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:26.637 [2024-11-18 10:59:52.455630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.455638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:26.637 [2024-11-18 10:59:52.455686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.455693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:26.637 [2024-11-18 10:59:52.455751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.455759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:26.637 [2024-11-18 10:59:52.455817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:26.637 [2024-11-18 10:59:52.455826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:26.637 [2024-11-18 10:59:52.455834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.637 [2024-11-18 10:59:52.455967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 233.732 ms, result 0 00:31:28.023 00:31:28.023 00:31:28.023 10:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:28.023 [2024-11-18 10:59:53.791797] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
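The spdk_dd invocation above is the read-back half of the restore test: it copies --count=262144 blocks from ftl0 into the test file, skipping the first --skip=131072 blocks of the input. Assuming ftl0 exposes 4096-byte blocks (an assumption here; --skip and --count are counted in the bdev's I/O units), the numbers line up with the x/1024 [MB] progress entries further down:

  # sanity check of the dd geometry; assumes 4096-byte blocks on ftl0
  echo $(( 262144 * 4096 / 1024 / 1024 ))   # 1024 -> MiB to copy, the "/1024 [MB]" total below
  echo $(( 131072 * 4096 / 1024 / 1024 ))   # 512  -> MiB skipped at the start of the input

The FTL startup that follows is logged by mngt/ftl_mngt.c as one Action/name/duration/status quartet per pipeline step and ends with a "Management process finished, name 'FTL startup'" summary line.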
00:31:28.023 [2024-11-18 10:59:53.791949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83406 ] 00:31:28.284 [2024-11-18 10:59:53.950137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.284 [2024-11-18 10:59:54.069548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.545 [2024-11-18 10:59:54.359504] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:28.545 [2024-11-18 10:59:54.359586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:28.807 [2024-11-18 10:59:54.521193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.521263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:28.807 [2024-11-18 10:59:54.521284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:28.807 [2024-11-18 10:59:54.521294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.521350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.521361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:28.807 [2024-11-18 10:59:54.521373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:28.807 [2024-11-18 10:59:54.521381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.521402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:28.807 [2024-11-18 10:59:54.522125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:28.807 [2024-11-18 10:59:54.522154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.522162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:28.807 [2024-11-18 10:59:54.522172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:31:28.807 [2024-11-18 10:59:54.522179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.522640] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:28.807 [2024-11-18 10:59:54.522697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.522707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:28.807 [2024-11-18 10:59:54.522723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:28.807 [2024-11-18 10:59:54.522732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.522788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.522798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:28.807 [2024-11-18 10:59:54.522806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:28.807 [2024-11-18 10:59:54.522813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.523145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:28.807 [2024-11-18 10:59:54.523159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:28.807 [2024-11-18 10:59:54.523169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:31:28.807 [2024-11-18 10:59:54.523177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.523278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.523289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:28.807 [2024-11-18 10:59:54.523297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:28.807 [2024-11-18 10:59:54.523305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.523329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.523337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:28.807 [2024-11-18 10:59:54.523346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:28.807 [2024-11-18 10:59:54.523357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.523378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:28.807 [2024-11-18 10:59:54.527739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.527787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:28.807 [2024-11-18 10:59:54.527798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:31:28.807 [2024-11-18 10:59:54.527806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.527841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.527850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:28.807 [2024-11-18 10:59:54.527858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:28.807 [2024-11-18 10:59:54.527865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.527925] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:28.807 [2024-11-18 10:59:54.527950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:28.807 [2024-11-18 10:59:54.527991] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:28.807 [2024-11-18 10:59:54.528006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:28.807 [2024-11-18 10:59:54.528111] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:28.807 [2024-11-18 10:59:54.528123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:28.807 [2024-11-18 10:59:54.528133] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:28.807 [2024-11-18 10:59:54.528144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:28.807 [2024-11-18 10:59:54.528154] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:28.807 [2024-11-18 10:59:54.528161] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:28.807 [2024-11-18 10:59:54.528172] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:28.807 [2024-11-18 10:59:54.528181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:28.807 [2024-11-18 10:59:54.528188] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:28.807 [2024-11-18 10:59:54.528195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.528216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:28.807 [2024-11-18 10:59:54.528224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:31:28.807 [2024-11-18 10:59:54.528232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.528326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.807 [2024-11-18 10:59:54.528336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:28.807 [2024-11-18 10:59:54.528345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:28.807 [2024-11-18 10:59:54.528356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.807 [2024-11-18 10:59:54.528474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:28.807 [2024-11-18 10:59:54.528486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:28.807 [2024-11-18 10:59:54.528495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:28.807 [2024-11-18 10:59:54.528503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.807 [2024-11-18 10:59:54.528511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:28.807 [2024-11-18 10:59:54.528518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:28.807 [2024-11-18 10:59:54.528526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:28.807 [2024-11-18 10:59:54.528533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:28.807 [2024-11-18 10:59:54.528539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:28.807 [2024-11-18 10:59:54.528546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:28.807 [2024-11-18 10:59:54.528552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:28.807 [2024-11-18 10:59:54.528560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:28.807 [2024-11-18 10:59:54.528567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:28.807 [2024-11-18 10:59:54.528575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:28.807 [2024-11-18 10:59:54.528583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:28.808 [2024-11-18 10:59:54.528590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:28.808 [2024-11-18 10:59:54.528609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528616] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:28.808 [2024-11-18 10:59:54.528629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:28.808 [2024-11-18 10:59:54.528650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:28.808 [2024-11-18 10:59:54.528669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:28.808 [2024-11-18 10:59:54.528689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:28.808 [2024-11-18 10:59:54.528708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:28.808 [2024-11-18 10:59:54.528721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:28.808 [2024-11-18 10:59:54.528728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:28.808 [2024-11-18 10:59:54.528734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:28.808 [2024-11-18 10:59:54.528740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:28.808 [2024-11-18 10:59:54.528747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:28.808 [2024-11-18 10:59:54.528755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:28.808 [2024-11-18 10:59:54.528771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:28.808 [2024-11-18 10:59:54.528778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528786] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:28.808 [2024-11-18 10:59:54.528794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:28.808 [2024-11-18 10:59:54.528801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.808 [2024-11-18 10:59:54.528818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:28.808 [2024-11-18 10:59:54.528825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:28.808 [2024-11-18 10:59:54.528831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:28.808 
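The 80.00 MiB l2p region reported just above is consistent with the geometry printed by ftl_layout_setup: 20971520 L2P entries at an address size of 4 bytes. Checking with the logged values only:

  echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 -> MiB, matching "Region l2p ... blocks: 80.00 MiB"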
[2024-11-18 10:59:54.528838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:28.808 [2024-11-18 10:59:54.528846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:28.808 [2024-11-18 10:59:54.528852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:28.808 [2024-11-18 10:59:54.528860] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:28.808 [2024-11-18 10:59:54.528872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.528881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:28.808 [2024-11-18 10:59:54.528888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:28.808 [2024-11-18 10:59:54.528895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:28.808 [2024-11-18 10:59:54.528902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:28.808 [2024-11-18 10:59:54.528910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:28.808 [2024-11-18 10:59:54.528918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:28.808 [2024-11-18 10:59:54.528924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:28.808 [2024-11-18 10:59:54.528932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:28.808 [2024-11-18 10:59:54.528939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:28.808 [2024-11-18 10:59:54.528945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.528952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.528960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.528967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.528974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:28.808 [2024-11-18 10:59:54.528981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:28.808 [2024-11-18 10:59:54.528989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.528998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:28.808 [2024-11-18 10:59:54.529005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:28.808 [2024-11-18 10:59:54.529011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:28.808 [2024-11-18 10:59:54.529019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:28.808 [2024-11-18 10:59:54.529028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.529036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:28.808 [2024-11-18 10:59:54.529044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:31:28.808 [2024-11-18 10:59:54.529052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.558222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.558271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:28.808 [2024-11-18 10:59:54.558283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.129 ms 00:31:28.808 [2024-11-18 10:59:54.558294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.558387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.558396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:28.808 [2024-11-18 10:59:54.558405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:28.808 [2024-11-18 10:59:54.558417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.609442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.609499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:28.808 [2024-11-18 10:59:54.609513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.964 ms 00:31:28.808 [2024-11-18 10:59:54.609522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.609577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.609589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:28.808 [2024-11-18 10:59:54.609598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:28.808 [2024-11-18 10:59:54.609607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.609734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.609746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:28.808 [2024-11-18 10:59:54.609756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:28.808 [2024-11-18 10:59:54.609764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.609897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.609910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:28.808 [2024-11-18 10:59:54.609919] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:28.808 [2024-11-18 10:59:54.609927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.626016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.626070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:28.808 [2024-11-18 10:59:54.626082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.068 ms 00:31:28.808 [2024-11-18 10:59:54.626090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.808 [2024-11-18 10:59:54.626282] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:28.808 [2024-11-18 10:59:54.626298] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:28.808 [2024-11-18 10:59:54.626308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.808 [2024-11-18 10:59:54.626319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:28.808 [2024-11-18 10:59:54.626329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:28.808 [2024-11-18 10:59:54.626336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.638624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.638668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:28.809 [2024-11-18 10:59:54.638679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.270 ms 00:31:28.809 [2024-11-18 10:59:54.638687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.638820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.638830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:28.809 [2024-11-18 10:59:54.638839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:28.809 [2024-11-18 10:59:54.638854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.638906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.638916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:28.809 [2024-11-18 10:59:54.638925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:28.809 [2024-11-18 10:59:54.638932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.639594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.639609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:28.809 [2024-11-18 10:59:54.639619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:31:28.809 [2024-11-18 10:59:54.639626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.639645] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:28.809 [2024-11-18 10:59:54.639659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.639668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:28.809 [2024-11-18 10:59:54.639676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:28.809 [2024-11-18 10:59:54.639684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.652537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:28.809 [2024-11-18 10:59:54.652883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.652902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:28.809 [2024-11-18 10:59:54.652912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms 00:31:28.809 [2024-11-18 10:59:54.652921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.655240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.655280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:28.809 [2024-11-18 10:59:54.655291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:31:28.809 [2024-11-18 10:59:54.655299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.655384] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:28.809 [2024-11-18 10:59:54.655858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.655869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:28.809 [2024-11-18 10:59:54.655878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:31:28.809 [2024-11-18 10:59:54.655887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.655914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.655928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:28.809 [2024-11-18 10:59:54.655938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:28.809 [2024-11-18 10:59:54.655946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.655980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:28.809 [2024-11-18 10:59:54.655991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.655999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:28.809 [2024-11-18 10:59:54.656008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:28.809 [2024-11-18 10:59:54.656015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.683542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.683745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:28.809 [2024-11-18 10:59:54.683768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.508 ms 00:31:28.809 [2024-11-18 10:59:54.683777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.683865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.809 [2024-11-18 10:59:54.683876] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:28.809 [2024-11-18 10:59:54.683886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:28.809 [2024-11-18 10:59:54.683894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.809 [2024-11-18 10:59:54.685242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.542 ms, result 0 00:31:30.196  [2024-11-18T10:59:57.024Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-18T10:59:57.967Z] Copying: 37/1024 [MB] (17 MBps) [2024-11-18T10:59:58.909Z] Copying: 62/1024 [MB] (25 MBps) [2024-11-18T11:00:00.292Z] Copying: 74/1024 [MB] (11 MBps) [2024-11-18T11:00:01.230Z] Copying: 86/1024 [MB] (11 MBps) [2024-11-18T11:00:02.172Z] Copying: 99/1024 [MB] (13 MBps) [2024-11-18T11:00:03.114Z] Copying: 109/1024 [MB] (10 MBps) [2024-11-18T11:00:04.058Z] Copying: 140/1024 [MB] (30 MBps) [2024-11-18T11:00:05.000Z] Copying: 165/1024 [MB] (25 MBps) [2024-11-18T11:00:05.943Z] Copying: 178/1024 [MB] (12 MBps) [2024-11-18T11:00:07.331Z] Copying: 189/1024 [MB] (10 MBps) [2024-11-18T11:00:07.905Z] Copying: 201/1024 [MB] (11 MBps) [2024-11-18T11:00:09.290Z] Copying: 212/1024 [MB] (11 MBps) [2024-11-18T11:00:10.234Z] Copying: 234/1024 [MB] (22 MBps) [2024-11-18T11:00:11.179Z] Copying: 256/1024 [MB] (21 MBps) [2024-11-18T11:00:12.122Z] Copying: 276/1024 [MB] (20 MBps) [2024-11-18T11:00:13.066Z] Copying: 295/1024 [MB] (18 MBps) [2024-11-18T11:00:14.010Z] Copying: 315/1024 [MB] (20 MBps) [2024-11-18T11:00:14.953Z] Copying: 332/1024 [MB] (17 MBps) [2024-11-18T11:00:15.897Z] Copying: 348/1024 [MB] (15 MBps) [2024-11-18T11:00:17.286Z] Copying: 360/1024 [MB] (12 MBps) [2024-11-18T11:00:18.232Z] Copying: 374/1024 [MB] (14 MBps) [2024-11-18T11:00:19.175Z] Copying: 397/1024 [MB] (22 MBps) [2024-11-18T11:00:20.110Z] Copying: 413/1024 [MB] (16 MBps) [2024-11-18T11:00:21.109Z] Copying: 425/1024 [MB] (12 MBps) [2024-11-18T11:00:22.054Z] Copying: 447/1024 [MB] (22 MBps) [2024-11-18T11:00:22.998Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-18T11:00:23.942Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-18T11:00:25.327Z] Copying: 479/1024 [MB] (10 MBps) [2024-11-18T11:00:25.898Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-18T11:00:27.277Z] Copying: 500/1024 [MB] (10 MBps) [2024-11-18T11:00:28.212Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-18T11:00:29.147Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-18T11:00:30.088Z] Copying: 535/1024 [MB] (11 MBps) [2024-11-18T11:00:31.026Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-18T11:00:31.962Z] Copying: 557/1024 [MB] (10 MBps) [2024-11-18T11:00:32.897Z] Copying: 569/1024 [MB] (12 MBps) [2024-11-18T11:00:34.274Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-18T11:00:35.217Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-18T11:00:36.158Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-18T11:00:37.100Z] Copying: 615/1024 [MB] (10 MBps) [2024-11-18T11:00:38.043Z] Copying: 626/1024 [MB] (11 MBps) [2024-11-18T11:00:38.987Z] Copying: 638/1024 [MB] (12 MBps) [2024-11-18T11:00:39.929Z] Copying: 650/1024 [MB] (11 MBps) [2024-11-18T11:00:41.314Z] Copying: 662/1024 [MB] (12 MBps) [2024-11-18T11:00:42.256Z] Copying: 674/1024 [MB] (11 MBps) [2024-11-18T11:00:43.200Z] Copying: 697/1024 [MB] (22 MBps) [2024-11-18T11:00:44.143Z] Copying: 722/1024 [MB] (24 MBps) [2024-11-18T11:00:45.087Z] Copying: 740/1024 [MB] (18 MBps) [2024-11-18T11:00:46.031Z] Copying: 775/1024 [MB] (34 MBps) [2024-11-18T11:00:46.975Z] Copying: 797/1024 [MB] (22 MBps) 
[2024-11-18T11:00:47.919Z] Copying: 820/1024 [MB] (22 MBps) [2024-11-18T11:00:49.310Z] Copying: 838/1024 [MB] (18 MBps) [2024-11-18T11:00:50.257Z] Copying: 856/1024 [MB] (18 MBps) [2024-11-18T11:00:51.202Z] Copying: 870/1024 [MB] (13 MBps) [2024-11-18T11:00:52.146Z] Copying: 883/1024 [MB] (12 MBps) [2024-11-18T11:00:53.147Z] Copying: 901/1024 [MB] (18 MBps) [2024-11-18T11:00:54.090Z] Copying: 925/1024 [MB] (23 MBps) [2024-11-18T11:00:55.032Z] Copying: 947/1024 [MB] (22 MBps) [2024-11-18T11:00:55.976Z] Copying: 962/1024 [MB] (15 MBps) [2024-11-18T11:00:56.919Z] Copying: 973/1024 [MB] (10 MBps) [2024-11-18T11:00:58.337Z] Copying: 984/1024 [MB] (10 MBps) [2024-11-18T11:00:58.906Z] Copying: 995/1024 [MB] (10 MBps) [2024-11-18T11:00:59.476Z] Copying: 1017/1024 [MB] (22 MBps) [2024-11-18T11:00:59.739Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 11:00:59.643309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.855 [2024-11-18 11:00:59.643665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:33.855 [2024-11-18 11:00:59.643952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:33.855 [2024-11-18 11:00:59.644006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.855 [2024-11-18 11:00:59.644062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:33.855 [2024-11-18 11:00:59.647911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.855 [2024-11-18 11:00:59.648230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:33.856 [2024-11-18 11:00:59.648318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.803 ms 00:32:33.856 [2024-11-18 11:00:59.648331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.856 [2024-11-18 11:00:59.648600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.856 [2024-11-18 11:00:59.648612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:33.856 [2024-11-18 11:00:59.648623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:32:33.856 [2024-11-18 11:00:59.648632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.856 [2024-11-18 11:00:59.648661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.856 [2024-11-18 11:00:59.648671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:33.856 [2024-11-18 11:00:59.648680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:33.856 [2024-11-18 11:00:59.648688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.856 [2024-11-18 11:00:59.648747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:33.856 [2024-11-18 11:00:59.648757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:33.856 [2024-11-18 11:00:59.648769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:33.856 [2024-11-18 11:00:59.648777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:33.856 [2024-11-18 11:00:59.648792] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:33.856 [2024-11-18 11:00:59.648805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:33.856 [2024-11-18 
11:00:59.648814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.648996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 
[2024-11-18 11:00:59.649109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:33.856 [2024-11-18 11:00:59.649330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free
00:32:33.856 [2024-11-18 11:00:59.649339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:32:33.856 [2024-11-18 11:00:59.649502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:33.857 [2024-11-18 11:00:59.649739] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:33.857 [2024-11-18 11:00:59.649747] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 75642e4f-01b4-4066-b9aa-775e5d86fcd1
00:32:33.857 [2024-11-18 11:00:59.649756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:32:33.857 [2024-11-18 11:00:59.649763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592
00:32:33.857 [2024-11-18 11:00:59.649770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560
00:32:33.857 [2024-11-18 11:00:59.649778] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125
00:32:33.857 [2024-11-18 11:00:59.649785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:33.857 [2024-11-18 11:00:59.649796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:33.857 [2024-11-18 11:00:59.649804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:33.857 [2024-11-18 11:00:59.649811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:33.857 [2024-11-18 11:00:59.649818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:33.857 [2024-11-18 11:00:59.649825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:33.857 [2024-11-18 11:00:59.649833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:33.857 [2024-11-18 11:00:59.649842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms
00:32:33.857 [2024-11-18 11:00:59.649849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:33.857 [2024-11-18 11:00:59.665006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:33.857 [2024-11-18 11:00:59.665044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:32:33.857 [2024-11-18 11:00:59.665063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.138 ms
00:32:33.857 [2024-11-18 11:00:59.665079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:33.857 [2024-11-18 11:00:59.665487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:33.857 [2024-11-18 11:00:59.665499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:32:33.857 [2024-11-18 11:00:59.665508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms
00:32:33.857 [2024-11-18 11:00:59.665516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:33.857 [2024-11-18 11:00:59.702121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:33.857 [2024-11-18 11:00:59.702162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:32:33.857 [2024-11-18 11:00:59.702175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:33.857 [2024-11-18 11:00:59.702184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:33.857 [2024-11-18 11:00:59.702277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:33.857 [2024-11-18 11:00:59.702288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:32:33.857 [2024-11-18 11:00:59.702298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:33.857 [2024-11-18 11:00:59.702308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:33.857 [2024-11-18 11:00:59.702375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:33.857 [2024-11-18 11:00:59.702386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:32:33.857 [2024-11-18 11:00:59.702400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:33.857 [2024-11-18 11:00:59.702409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:33.857 [2024-11-18 11:00:59.702427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:33.857 [2024-11-18 11:00:59.702437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:32:33.857 [2024-11-18 11:00:59.702446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:33.857 [2024-11-18 11:00:59.702454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.118 [2024-11-18 11:00:59.787330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.787563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:32:34.119 [2024-11-18 11:00:59.787583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.787592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.856967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:32:34.119 [2024-11-18 11:00:59.857025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:32:34.119 [2024-11-18 11:00:59.857144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:32:34.119 [2024-11-18 11:00:59.857244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:32:34.119 [2024-11-18 11:00:59.857358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:32:34.119 [2024-11-18 11:00:59.857415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:32:34.119 [2024-11-18 11:00:59.857483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:34.119 [2024-11-18 11:00:59.857551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:34.119 [2024-11-18 11:00:59.857560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:34.119 [2024-11-18 11:00:59.857568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.119 [2024-11-18 11:00:59.857704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 214.358 ms, result 0
00:32:35.061
00:32:35.061
00:32:35.061 11:01:00 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:36.977 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:32:36.977 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:32:36.977 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:32:36.977 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:37.239 Process with pid 81380 is not found
00:32:37.239 Remove shared memory files
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81380
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81380 ']'
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81380
00:32:37.239 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81380) - No such process
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 81380 is not found'
00:32:37.239 11:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_band_md /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_l2p_l1 /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_l2p_l2 /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_l2p_l2_ctx /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_nvc_md /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_p2l_pool /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_sb /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_sb_shm /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_trim_bitmap /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_trim_log /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_trim_md /dev/hugepages/ftl_75642e4f-01b4-4066-b9aa-775e5d86fcd1_vmap
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:32:37.240 ************************************
00:32:37.240 END TEST ftl_restore_fast
00:32:37.240 ************************************
00:32:37.240
00:32:37.240 real 4m28.104s
00:32:37.240 user 4m15.532s
00:32:37.240 sys 0m12.359s
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:37.240 11:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:32:37.240 11:01:02 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:32:37.240 11:01:02 ftl -- ftl/ftl.sh@14 -- # killprocess 72148
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 72148 ']'
00:32:37.240 Process with pid 72148 is not found
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@958 -- # kill -0 72148
00:32:37.240 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72148) - No such process
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72148 is not found'
00:32:37.240 11:01:02 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:32:37.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:37.240 11:01:02 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84124
00:32:37.240 11:01:02 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84124
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@835 -- # '[' -z 84124 ']'
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:37.240 11:01:02 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:37.240 11:01:02 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:37.240 [2024-11-18 11:01:03.040693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:32:37.240 [2024-11-18 11:01:03.040984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84124 ]
00:32:37.502 [2024-11-18 11:01:03.202767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:37.502 [2024-11-18 11:01:03.320750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:38.445 11:01:03 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:38.445 11:01:03 ftl -- common/autotest_common.sh@868 -- # return 0
00:32:38.445 11:01:04 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:32:38.445 nvme0n1
00:32:38.445 11:01:04 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:32:38.445 11:01:04 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:38.445 11:01:04 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:38.707 11:01:04 ftl -- ftl/common.sh@28 -- # stores=6ff8e192-5d96-4f06-aad3-875f80900ff5
00:32:38.707 11:01:04 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:32:38.707 11:01:04 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ff8e192-5d96-4f06-aad3-875f80900ff5
00:32:38.968 11:01:04 ftl -- ftl/ftl.sh@23 -- # killprocess 84124
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@954 -- # '[' -z 84124 ']'
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@958 -- # kill -0 84124
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@959 -- # uname
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84124
00:32:38.968 killing process with pid 84124
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84124'
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@973 -- # kill 84124
00:32:38.968 11:01:04 ftl -- common/autotest_common.sh@978 -- # wait 84124
00:32:40.354 11:01:06 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:32:40.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:40.616 Waiting for block devices as requested
00:32:40.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:32:40.616 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:32:40.878 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:32:40.878 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:32:46.175 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:32:46.175 Remove shared memory files
00:32:46.175 11:01:11 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:32:46.175 11:01:11 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:46.175 11:01:11 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:32:46.175 11:01:11 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:32:46.175 11:01:11 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:32:46.175 11:01:11 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:46.175 11:01:11 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:32:46.175 ************************************
00:32:46.175 END TEST ftl
00:32:46.175 ************************************
00:32:46.175
00:32:46.175 real 18m8.594s
00:32:46.175 user 20m1.415s
00:32:46.175 sys 1m46.674s
00:32:46.175 11:01:11 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:46.175 11:01:11 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:46.175 11:01:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:32:46.175 11:01:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:32:46.175 11:01:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:32:46.175 11:01:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:32:46.175 11:01:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:32:46.175 11:01:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:32:46.175 11:01:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:32:46.175 11:01:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:32:46.175 11:01:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:32:46.175 11:01:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:32:46.175 11:01:11 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:46.175 11:01:11 -- common/autotest_common.sh@10 -- # set +x
00:32:46.175 11:01:11 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:32:46.175 11:01:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:32:46.175 11:01:11 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:32:46.175 11:01:11 -- common/autotest_common.sh@10 -- # set +x
00:32:47.561 INFO: APP EXITING
00:32:47.561 INFO: killing all VMs
00:32:47.561 INFO: killing vhost app
00:32:47.561 INFO: EXIT DONE
00:32:47.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:48.396 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:48.396 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:48.396 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:32:48.396 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:48.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:48.919 Cleaning
00:32:48.919 Removing: /var/run/dpdk/spdk0/config
00:32:48.919 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:48.919 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:48.919 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:48.919 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:48.919 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:49.181 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:49.181 Removing: /var/run/dpdk/spdk0
00:32:49.181 Removing: /var/run/dpdk/spdk_pid56890
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57087
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57299
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57392
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57426
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57549
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57561
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57755
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57848
00:32:49.181 Removing: /var/run/dpdk/spdk_pid57938
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58044
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58141
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58175
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58211
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58282
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58377
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58802
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58866
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58918
00:32:49.181 Removing: /var/run/dpdk/spdk_pid58934
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59025
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59041
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59132
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59143
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59201
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59214
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59267
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59280
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59434
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59470
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59554
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59726
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59804
00:32:49.181 Removing: /var/run/dpdk/spdk_pid59841
00:32:49.181 Removing: /var/run/dpdk/spdk_pid60274
00:32:49.181 Removing: /var/run/dpdk/spdk_pid60372
00:32:49.181 Removing: /var/run/dpdk/spdk_pid60481
00:32:49.181 Removing: /var/run/dpdk/spdk_pid60534
00:32:49.181 Removing: /var/run/dpdk/spdk_pid60554
00:32:49.181 Removing: /var/run/dpdk/spdk_pid60638
00:32:49.181 Removing: /var/run/dpdk/spdk_pid61264
00:32:49.181 Removing: /var/run/dpdk/spdk_pid61300
00:32:49.181 Removing: /var/run/dpdk/spdk_pid61774
00:32:49.181 Removing: /var/run/dpdk/spdk_pid61871
00:32:49.181 Removing: /var/run/dpdk/spdk_pid61987
00:32:49.181 Removing: /var/run/dpdk/spdk_pid62029
00:32:49.181 Removing: /var/run/dpdk/spdk_pid62060
00:32:49.181 Removing: /var/run/dpdk/spdk_pid62086
00:32:49.181 Removing: /var/run/dpdk/spdk_pid63932
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64058
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64068
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64085
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64124
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64128
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64140
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64179
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64183
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64195
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64240
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64244
00:32:49.181 Removing: /var/run/dpdk/spdk_pid64256
00:32:49.181 Removing: /var/run/dpdk/spdk_pid65618
00:32:49.181 Removing: /var/run/dpdk/spdk_pid65715
00:32:49.181 Removing: /var/run/dpdk/spdk_pid67117
00:32:49.181 Removing: /var/run/dpdk/spdk_pid68503
00:32:49.181 Removing: /var/run/dpdk/spdk_pid68591
00:32:49.181 Removing: /var/run/dpdk/spdk_pid68679
00:32:49.181 Removing: /var/run/dpdk/spdk_pid68761
00:32:49.181 Removing: /var/run/dpdk/spdk_pid68860
00:32:49.181 Removing: /var/run/dpdk/spdk_pid68932
00:32:49.181 Removing: /var/run/dpdk/spdk_pid69075
00:32:49.181 Removing: /var/run/dpdk/spdk_pid69436
00:32:49.181 Removing: /var/run/dpdk/spdk_pid69467
00:32:49.181 Removing: /var/run/dpdk/spdk_pid69910
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70098
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70195
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70305
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70352
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70378
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70677
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70738
00:32:49.181 Removing: /var/run/dpdk/spdk_pid70805
00:32:49.181 Removing: /var/run/dpdk/spdk_pid71197
00:32:49.181 Removing: /var/run/dpdk/spdk_pid71344
00:32:49.181 Removing: /var/run/dpdk/spdk_pid72148
00:32:49.181 Removing: /var/run/dpdk/spdk_pid72288
00:32:49.181 Removing: /var/run/dpdk/spdk_pid72446
00:32:49.181 Removing: /var/run/dpdk/spdk_pid72543
00:32:49.442 Removing: /var/run/dpdk/spdk_pid72830
00:32:49.442 Removing: /var/run/dpdk/spdk_pid73115
00:32:49.442 Removing: /var/run/dpdk/spdk_pid73473
00:32:49.442 Removing: /var/run/dpdk/spdk_pid73658
00:32:49.442 Removing: /var/run/dpdk/spdk_pid73811
00:32:49.442 Removing: /var/run/dpdk/spdk_pid73865
00:32:49.442 Removing: /var/run/dpdk/spdk_pid74041
00:32:49.442 Removing: /var/run/dpdk/spdk_pid74068
00:32:49.442 Removing: /var/run/dpdk/spdk_pid74127
00:32:49.442 Removing: /var/run/dpdk/spdk_pid74373
00:32:49.442 Removing: /var/run/dpdk/spdk_pid74606
00:32:49.442 Removing: /var/run/dpdk/spdk_pid75266
00:32:49.442 Removing: /var/run/dpdk/spdk_pid76072
00:32:49.442 Removing: /var/run/dpdk/spdk_pid76799
00:32:49.442 Removing: /var/run/dpdk/spdk_pid77556
00:32:49.442 Removing: /var/run/dpdk/spdk_pid77703
00:32:49.442 Removing: /var/run/dpdk/spdk_pid77790
00:32:49.442 Removing: /var/run/dpdk/spdk_pid78385
00:32:49.442 Removing: /var/run/dpdk/spdk_pid78438
00:32:49.442 Removing: /var/run/dpdk/spdk_pid79142
00:32:49.442 Removing: /var/run/dpdk/spdk_pid79581
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80361
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80489
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80533
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80601
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80652
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80715
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80895
00:32:49.442 Removing: /var/run/dpdk/spdk_pid80975
00:32:49.442 Removing: /var/run/dpdk/spdk_pid81042
00:32:49.442 Removing: /var/run/dpdk/spdk_pid81094
00:32:49.442 Removing: /var/run/dpdk/spdk_pid81129
00:32:49.442 Removing: /var/run/dpdk/spdk_pid81190
00:32:49.442 Removing: /var/run/dpdk/spdk_pid81380
00:32:49.442 Removing: /var/run/dpdk/spdk_pid81605
00:32:49.442 Removing: /var/run/dpdk/spdk_pid82111
00:32:49.442 Removing: /var/run/dpdk/spdk_pid82816
00:32:49.442 Removing: /var/run/dpdk/spdk_pid83406
00:32:49.442 Removing: /var/run/dpdk/spdk_pid84124
00:32:49.442 Clean
00:32:49.442 11:01:15 -- common/autotest_common.sh@1453 -- # return 0
00:32:49.442 11:01:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:49.442 11:01:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:49.442 11:01:15 -- common/autotest_common.sh@10 -- # set +x
00:32:49.442 11:01:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:49.442 11:01:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:49.443 11:01:15 -- common/autotest_common.sh@10 -- # set +x
00:32:49.443 11:01:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:49.704 11:01:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:49.704 11:01:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:49.704 11:01:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:49.704 11:01:15 -- spdk/autotest.sh@398 -- # hostname
00:32:49.704 11:01:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:49.704 geninfo: WARNING: invalid characters removed from testname!
00:33:16.285 11:01:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:18.834 11:01:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:20.744 11:01:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:22.648 11:01:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:24.092 11:01:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:26.000 11:01:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:27.901 11:01:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:27.901 11:01:53 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:27.901 11:01:53 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:33:27.902 11:01:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:27.902 11:01:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:27.902 11:01:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:27.912 + [[ -n 5028 ]]
00:33:27.912 + sudo kill 5028
00:33:27.912 [Pipeline] }
00:33:27.928 [Pipeline] // timeout
00:33:27.933 [Pipeline] }
00:33:27.948 [Pipeline] // stage
00:33:27.954 [Pipeline] }
00:33:27.968 [Pipeline] // catchError
00:33:27.978 [Pipeline] stage
00:33:27.980 [Pipeline] { (Stop VM)
00:33:27.993 [Pipeline] sh
00:33:28.278 + vagrant halt
00:33:30.825 ==> default: Halting domain...
00:33:36.130 [Pipeline] sh
00:33:36.414 + vagrant destroy -f
00:33:38.957 ==> default: Removing domain...
00:33:39.544 [Pipeline] sh
00:33:39.828 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:39.837 [Pipeline] }
00:33:39.852 [Pipeline] // stage
00:33:39.857 [Pipeline] }
00:33:39.871 [Pipeline] // dir
00:33:39.876 [Pipeline] }
00:33:39.891 [Pipeline] // wrap
00:33:39.897 [Pipeline] }
00:33:39.911 [Pipeline] // catchError
00:33:39.921 [Pipeline] stage
00:33:39.924 [Pipeline] { (Epilogue)
00:33:39.937 [Pipeline] sh
00:33:40.224 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:45.518 [Pipeline] catchError
00:33:45.520 [Pipeline] {
00:33:45.533 [Pipeline] sh
00:33:45.818 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:45.818 Artifacts sizes are good
00:33:45.829 [Pipeline] }
00:33:45.843 [Pipeline] // catchError
00:33:45.855 [Pipeline] archiveArtifacts
00:33:45.863 Archiving artifacts
00:33:46.027 [Pipeline] cleanWs
00:33:46.046 [WS-CLEANUP] Deleting project workspace...
00:33:46.046 [WS-CLEANUP] Deferred wipeout is used...
00:33:46.058 [WS-CLEANUP] done
00:33:46.060 [Pipeline] }
00:33:46.073 [Pipeline] // stage
00:33:46.078 [Pipeline] }
00:33:46.091 [Pipeline] // node
00:33:46.096 [Pipeline] End of Pipeline
00:33:46.136 Finished: SUCCESS